[Binary artifact: POSIX tar archive of the Zuul CI output directory "var/home/core/zuul-output/", containing "logs/kubelet.log.gz" (a gzip-compressed kubelet log). The compressed payload is binary data and is not recoverable as text.]
4q0;c$:'-uڕN-/  `zw$U brli|9Vy4ߦrWonR]GjWlZm63<8C@ F_ tuayJs=:NV3™)}9bo9=5oG}˕@,\xq"Ō#MHQOL3Ub" 2}uq)vwwZ8|p3-\T_-${pUwŪþ- ) (A +вi"FLqND$tNO3@xnd\=uLx(=" zP.샄y f./*葄X Am~Am4oGGt)5y%ygrZ_y˝wkwWg}<[l2J/7!hm9bW 4xS߮7+lbbj~+t{ÁEնXe 6j-H$N;ōbG̏nX NfA`Q*ԠvZrq L74n15 fo6 evvޞt${&ײ6س]um纹Ul_S2cn]5 ܮTdh>qlɮGM;r>L5O7l~ icez76^GKe/u!A5?o96I˟h~ B} Q;uofRʟѴ)xOgLEp $JYe\S&>>S|}|~W|GGnb F,(#"=.K R{xiL")ɕ2΅H\"Iψ Es 2%tblBW+t=/;Tk-5׆rٽ+IJBiX XQQeR[0\pnTȲC/Ef9::_+tA:,c*їe@m֊URsWky &f6ǻ-=#םJKƑzw;}Fˇ=^ŎݮriS!q4@pIyAH, Ȅ 򚱜>UXyel`&,7Ik3}BBf5xBi4>8wW+Wa4ifscY>,6֫2/2J(D P+!:d+m&L_S5]>Uvbv, ;2##R<8.6?4dii|Ak eB(n g+3h i?%__pTdO:*cvyd~vַ&{lBL/"2aZ[ @wf>+FQXJ?"~F֔-,&dÛ6LoYBw'-'-'-'>1Y;/L9"ڇrlФM ý.Mr!p 㣿Һioܚk~ϭh#d'3Ǻ2llko]C#߫~V9>bJi> dLho`8d^%ǥW}QV؆\\BG 5+5y`g\p<0mIT:PWT6.* 5X3}1!VJsÝyZ b!bEoa(~7?vtQߎ~ڂW=$]0.GLU-eg?|\݊4i>J_!otZ}Fp쥶<ۉTFՀgpaƇFƨ+hxfDn,ŏtUsR'{'yQϠIɻo.V!9 `:G'wQ表"rG+JN1yvl;)O'[8ޡZ >vz/e]R:L\ Qfm-8IqEڥNctj 'Ab˝0+ Nq B`<P6 f1b69rLI[*L] SGU֤J?-$ݚ]zR4;.oQ UpiX+ O$Dc9$0DΉ9QY.x*l|Bπ<4E`YdeJf̳(a'J8 O)tI" MSꕑG kD! 3iW._%/g=BF}Qfз>ġ}z)ٍ)J|mtKø,HYSt(2H6,,bHsY 0t όi9k]0 fht2)f-G.ƇF7m~L5޿\lkZp}xuK#՗S]}+xgiJ_VOEA\e]J20ڮ$@St{7%"D'xuBud[E<}}*lvŀÏb3ݚM}ʒS[buku3Î{~ouѷ]z* ߢTwGRśt艬TK`~6=9Hݚ38V:P8ӭbS 7=aB67ʳZqaq OYZ-#v?oRwOs;?8YMg[5ܞwv[E܎EtmS{Nӵǟ:2Riw㉛& <=kyo6)oWbL-Nݎ֫]mO+[k5EcHH/U%HYFt#10Ԑ1/dKs3,W]|bJ xNm</:0\0:'e,M'(239ā`NZ'efуд Y3ѯk5۸^y>ҏ7m'㳐VgUglt kЃ!D""RUrт=x]&fb柯koVUd`6?JlC|SŷgB]| ])Wk~NRSso:$Hnm ug^"F#,e*o$*69!pлK!2\,2ŭ֜CRZ~YS_ؾۧ"LydߵR}8gLZnz"jn_U\|._~4WO-UEo[#:c1+Gόf6є,  V&r{={g^b^jK}qdKJ cBflUe١VYdK"OaUjR)Qܖ4R A>R&!:AdKZdT&p]Vk΁vBt ~E?NHȹ"&ӰŒ?B)IǨd֑*TVN+٫$q9@.&0y!&hPVl<%|kVM.[U'zȽvg6o_L_Xc4p*i Kp!x ǢDUUQ[s*OERbYG,IA RZs#c=R YƱX*cfwy[q϶b~\]-r?>O/ˆ=$|devJ#0d"?3h`7媈-ȒCqBɩNxX9uתgckDZh*#q@ݢցl:ЙsrtF#Nzvt!nRx, Y2BB]Qƴ'% 43%I' 4$mIsI#րhj~D>ѫysZf%⢭vw"tIV %0kMN( d Z瀋. VcUCw_&hȽ ~zG[︒Q:DpԏZۨoIG ?0yOG䆾CheGy4odM N :Йvjů.4ԫ6hhq7Ŗߍm NNxܽ.=w|Ͽ4.FqsSNKiaqitOW?h 2l\l^ b]aJRfz>6R2Մd}CĦ  }D{oGnYBg"UD=: %xg=0zg\XeQ{HĔ 2:YDsEQl5`Cm[p s$BjDK7:f'&xIc?j>F/yJ&ǁĿd숦lܔ:*R?UN%P4 *\TpM檬`XsX%/U)}/ߏnՙ_{ ]:ֺȵ+ueCl=') +S4I $ʿwmm7U!x㭓IIÉKD*$~ËH#҈dʲ=uhM +)5Z Ѿ"2ЫܦT믇iЧ}8Zާq0 t_3?ОSQ M>玻W?y7 G."eb}Gh4v=K0NJP{O@dןA!f,]?<ϧw2Unxa[Dxk=Ss7e?(y0&  喟мO];vڟ/;[^>yo.2 E)>K` KVYBlp|g OJ}%%AK^1ў3Ddpŧ>/3/❝ v&N}yt £kVe4}do3|;t홞^I`L3X+3񄝝ݨly{a *T0$¢}@9V|#P9n}s9VjLچ}9'8 FfHF+z"崜]^\Ҽu()Ix^&z 1E R:XNJ;2D2~˘AHåRY魡8Eĥ}r,MIћ8k}^ ۴RWR4B2Jt6kz7AAep/U&TvK0|Ԋ&u0CfB,j6cd*$6>.7\&Ģ 7:;-Oe V޺K;vb4%F۴ƒ5YIfRBҎD`+Jla:ZH 4tr]c[R(t=Z]~LB[ x38.e\ຩ*7yy"Ϳ{|`D25K;3HU8q||u) 1Ͷ"GBZ@><_ya.f-a+j0X7yL?"mXX6DFH:6\FE΃/):9}dJg)D*HBdYj@[/mkmgy:o43OȔ!cL} O [ VHjiăADA&kc2 &'/k0XAPe ~\d\$(Ōl~E,.ͭ )+<̔rjWP!F)dI 6j-(WqAbA\& th<^6ʀk&.-V+rj|i 3?U2r*x:hEy'rhQE-h{K!\ބux홰{O=(ܽx-UlʪYBK7ch If d˧Y]( EJ%+WAt҅C~u S\>(Yjؠ%^jY61HO!bb#P r+(Bt. $9,@2KZ>-"&c29rA Q)& 'c2c"ጏѪ *'gD S0۳ۛ8kFyjMuf/{gGS!?vWmիFh,.]NY0f7i0oa%(S %j &tI X+JTQ'Z osQAo`Af_JH\J'Njn\%-;'/qn_+q3)$+c~ɮJ s?jbBXmWTG#9zC.C믬מdY em0c9}B1SЏC%Smhou'Y *"Ge<@`1 @n- swGpN0}?z6Yě"}n,Ǘ\ЦE$ h3 y@`.yn:N4J`(j4J:`Ho$WY6 *q-flfcQktIO-,+@=o@/]6. 
`;S|wXbd(ƘaBJhi*=+%ӛ~?Ih*W)DYL#y؄Ml|aV UVHny̦͉IŽ ߅ԕ|;gʣB1CuV!fZxH sE8ͥ\8:Mug׌w}\ ~H'~a}\|c Q][NW ¬rWFz`)w  vK0i ,8~o4mqKX!C8dSN+Zc"8㏻sR'6kioKiV`=X.1ԁiVsqGpE J">z;G"}hS{kH{G^~jӽŏ^2f7uu9k1 sj,qA!m Fgiz`r{qѯɕ.\f+F\M0;Wc Gß.4{/4vruO']n줌n'Z1 G~,XvlW}pwͽ׎g̨Iaqq)K>8 FyP`4>׵F5U4'~4v4\C!ut_o;?~}}ݫ7˿tWcEWCGa H.n?O^NN EJ{QZvM *zq$h<&ZT\ͽp[br1n|dmU9`eYþ>ZF_d#f+< (-fHȚ%- LV(m"Jqt<([v6l]=ᰁp2_L'G dcIG BQH.@fsIMo:dL3q:8$R?Cosb2kctds+[}Nv-C`aU y$g x2kE\#JI+n{Zl_ ׌_EҞzp)qtxڅ[4.q4: 7}(5 t>_\o J]1/_י0s-u]u ONKdѱoՕ_t x7n6jCSm١=mK:L#!zbzBr@`dJ}hCWw箴c,ƒEA*@ SJ}aDʡ50`:xE 9=BPbADa&(WtIHBFEcn)C0¸V9<:Nhr˕|Mсo92 :Y"p4VjE !5א_CkH=f!5N_CkH !5א_CkH !5א_CN !5o02+>~XM|)O"+iևՒ4_cHiT\+ Bp\+ Bp\+ Bsp  f-YKn֒f-YKn֒f-YKn֒f,lYjd{, !{Q#Ȏ-#qHѝ6j Űavښw DɰY)Av3ChIwa4J ɚ39,+@Aern~ݔ<[G泍1}7[ЧZvⳣ0ݬl}^{vŸhcJc|203$W.hDTIADLq.cA4Bi\CU R  h #xƓVћ(I t^k!8z0)PC4 J`ty^ ɇͨխiGDtv'x [Hz4wG mM7F%'8A#9JcFPOOgtv~T=U<;#h=Ig äAGgJKa +薞mWqid*hAq=xxsU%'0޵4+翢e@7$Ev&,uC钴ut~}|Ȥ!) GlY@ uNRrF׹-\1!Xd ! !ݪ$`rB5{,h*:$ԛ@Qk6!J.u >/[*jToW'1zO;+p4 b E$zٖg*(hM:.>.gO+ʟ8,nZ2wwL؏KWMuj]Uqus-|x/"\n=\Y0ِ!UJ w*R}ZJ[]O~×=ֆ,~rf.&a٨ՓT捣 ĮMO0F]p%[H}g\S\}}-DGMԽC{Ea{j[q'בqPӹuG5 qv*{*_h^VG17^n8ë t@wZ㟻-s[M|H_iݦ6}A^:=fΐEXۛfVýs@m7؛BTZ7G$}$iZi?h0+m3ՖxWy>&~;Mh;3ÿJ k"h;;.t@Q+1Sr| E=4$6/ٺ4zJ`Nfh0'RK#RU'_ش<96ڊݍSW黻iA-5go}tnwI o%r,>ϜF]R&6qlַIke*IJ=J ZXW/A yJLyG`xyD1wDyˆb0iJ&,;T"uvZ);Fo{*sXͥ+j]i0EWD!Y'\ࡨ J.61#64S>f/zm7_ZB˞}l/|?o?BW|ՂU@N$0N#ʤK)$ ڢUbGyC|i:笩H+lrNA >d)9S%SޱzttfCJI85Eu=-c5ROVI[ rٹ~DvVAK5ρ>pz bg>҃B/+ڊѶJ= s8TAR1~ QmݭZ7d:i}tnw-X6=o?j6W3Vs-Ó~gy;?3[^fKo|^4c)h;1#*ò6͍Ԡl2dA3EIu!b!Ϭ8TbdܫLF75pLdio;cYTB&PaQ *] z~ #E.ֵаkh77{T,@0<߄N$*&Me %Td($S$:AAQ..$kmt$S2 p:P\T w){!f HXhR"@)#llPsx Yt3y4ff35wsRG[j- 8D6,BJDC1v"b,5ՌښR Ah4mmb,`W 2l)MNl*@ +֚85c;L6Bj qԅg]|ȸ' zuHiy_5dr8xBC, &V6H' `9`f;3hBXWjleUA~ I TMM4&#]bibT$N k Xc7ӴebEk7jmXkQkv/NFy0S1fb1*.`&klT*k[v"B,kNeUSٰIYD-Q3pd %٨8ҨߐT8j~MchF8jցtf(l钌::<ۓ U"X:$Qm- m  dPgB bY/I#[ҕbXZF|U֋MҐ|fɩz6֋vԋ^[́ diME(RFlfGl0bSPY9ŵ^>l6;NՇ>t3ݟ s7F?^()l$/g/?;E B/e/Qкq 16-R(@!x/%3)FgQ,BP`WH6Ky8M9*Kz9*2c9߰H"a6}X q (5("|E|7_y1HJʨh!c(hޥABJ D#QT(Z6CRr>'QBѹ HQ\,2F"[g(簯+ށwsGPpqytVW r څ,5dӞ8M ~K4!G:3(4C:+;,|Gs&r$?AWFJ )<GБ+sRfv.6A =$LTT'ZbUoG-xG:Si:|BSs:[ݺϻS*HZZ%ȦCc1USf İ0Cg $Kpwo^4s¢"M $* J<kTI_P[0RIYR[^(KADa NAL Jʕ?b2k9 b3q`|Wo6>m)FZ^WIn#4-D-ڃ(7{箏deALdm<WȢ6S7R;5 3% bXYFw[C[ 4/45w./ڛb:|MӖ؀.`Jm;k&OQǜs;; A:L:j6YARά,RB*1hT(oKTt l6@8:vA*Jp=R QXkGRUiMJ#WzyHtt\9 .1jU>>M(G\'5퇱!ČxBx YO^۶J0th;3ÿZ*9YGU S ;J̔Ht& 'w8  RRQӫTIXP׊d*HVYm$ʠk(e#J÷\ؐ/*T.(*MB36H?,=|E}/ uS?Sݿv7WP뻻Q;[QpXTu!k")֠^QC\̂^lM1Cǔ"J$V:`a$^HJjl%&!Kd>n&E1;?u1ձ)m_ͭF ; ݦS/D}l=&Ͳ4Q[F2V)pH#-f&kn"P9Cgj{ȑ_i;-|Q$$'9bwˑ_ZeʖLGM٬OUp'h~ﺹf-ñ|*<ȘnLo3ݝy6泻C<~|vKH^R5Rv$0Yί&]WגڮHS)S}8 )s^KU^|T< 7l :̬\Ɛ5 \ȝx}xNj6=vc133ٹ h51fNVgbAI.wf3.J_@Z?~NNpn{{UoM~Ίۈ6&4& KŬ1L0$ɕK&Z.J`rgqF2 s&aaJ3_.=3"Gyt,l0L*L>8`}yVƺH# y 0-N8+[6셑k &gh,jU6х%RȁQiI&9o6}G P.Ej[v~԰#ėjxQ.B,'W_8 @k1Mv)CѣqNK3 Bid'C9`.{e#?Ƈ[=ݎ}(~$8|Jg&riPJ1J/udMG=-IkJtfC4*' om=N:lll9O8pw.k9O-'nG5<}?ϴpˋϯHw'!~5o)yq6}\{ Vצ]9S_ڛzsީlm pУöu"e/gTԷ=owtFm~X֪e7M̲Siˍf{öGϝ. QË1zi|E)&i.H%HCE΃gR&^A*ႌ9C s=z[8͗'J[d9~nCiXhR"VsBf:g8̉T W$MVT Yot[ܸf{76;JA)*“ǪSN,Εٟ4&qx$ OE˥GsСI^7^ԠqRƂKesG:-N4?s/^zDDf"ɐ؊Q圃W1D\ȁț! ]lu0 ؔG$bN3ZA.{g$,Al)Oɪ? VOm rٵ>r\vo]; }/;W_>{OCӋGȭucE"WRI rx7e־ҺZWoap41n ~?7۹yϷ{g{,Ws;:.6Gx-2oM-9lNUߚ_-uRѡX蛖hmnepddtv2ҭ~Ge,-Y(@WHh7yЁ;Md'WdP%v&ںR-I -/˹Gg~Yǁ1l4٫}JZa*,=JvEk.B4AgRۘdDt,yX&)[1*!=i˙Ui@!l.BBOJBBKa 2z 8;0,]) W'}۰Ub&*"Ԩi6V&s2'ij`AH/ FVEFet1{όtH\$JqFyt)($L UqjXXmf2∅G?r]]f<] rxNAR2!Y l@gDvW&66WElaDFbș ^`S(zQsI+5e^ [Cʄ2bWg7b,nfǡ*Q{d[yq4 Ye[SdũHSv,aJ$ 9xH&&OEđ0>p>fY :i*a5qvakԯO@`<[k~2"GDZOKk4Q9'sMEPOc0P1E@cJ#I5\pT;0TR@*iB K#i ^*'#b5qv#NjD..בK6KES͈#.n=0m{n!Ŗ )IkcIV h^$se_爋=. 
V⡭0<<U ~_T 1ժ~KOn5_cYHy v2!:cD&k’&1 (}>(=)BOV?/ĉ 9TCDZļw #%$l (>lb o[3QĀ!;o|ZQ P`7gkuȤ=>if^˩n`muI~9"~|AVto~sq˰#1q4vDr>MkrG2zrKru9y$s*ޠI@T 6b~sYSXʃ3Jq>8)u^l'mHǮ^PKkR2D'5E : Ix1&Abє=H%F)c@. C38w_/4(d:/@|HɊڕ5# 1o G߽zݻlq6*6ϸeO4>Xхu˃΁ S5-&;{|[,כ[8n\k7SzÑMM ?Rz|P>e04G{Iah?tzkgC›d7]o@-4 ŧ' ]~,mix]UIf#Ol8oӊL1A*0[=kש[ܐo;oݛ?܈>hY}PhŸ-_C7vk]nokA>/dУfHimNοYWOi!w}Z_tLY]/f|LIszI&؛P;>&o'4'׋QˋIOoOxV3u|ժ,M ӢL^T0CU 7F4o4;2,9= }*7zgw֣2 -JJ |Ftr6t9R?PA3q9@ ) !]֥Ȳ8 mFPBl9Jp4vjvESʧ9i9OH_?"8Zj/d! ]Ug˳KA&wJv"K L1~3i5R#eg13ر*IdgJ ]C9VjLZ|9'xJCT,dmPFY~dUX !ts O1m X6|K̇13ҴbŔDw6}g7 DHAB0X: E i3Oq{: Y-|gÆA|љR !"RGA$w9MXё&UB[Bc* <$If%y4|8{š^/j> )lX߳"e'-^?|-˥9JMsǽ5N 8)J)rRڑq%JT<񨬮t)hRkp), Pxczgxlbj">j 9Zk&ΞoSSkfB@4ZFf͸C3H0Ȣ Ȥʎy/1٤oPYCC]3HBbNjN{u j2}|ps]R;oN:-)֫vb4O0Fc ,$3o"YrdZH 40yԱұE %^B׾j]~Cl8Fw ɡl/2,oɇ54Ӫ%Z^( C# QHՐRS_ѯ[Wz vN<C` 8 Vw8s:]܉ֹȴ^ib84=T^6+Y em0.}S/o^BuB ո:OjbC<D%-A H1}Q4=la^X~~~6Zniy-f!8hB%ύuR' F %숦Tq2  l8,T \ĵ"Ԇ6tE 9R,Q ~g4-yiÁQ1Ä"Hhi*u=+1/wmmH2;s$4и*^3IUdaǥbL I_Ë(K$J>Dx@hIh@ӋQv8pf`Dʖ8!H`̗R tJ-0ϑٔ91Fqk}6+< N/3HZgbFp!й&.s{s.ΙιZoI.‡ N.,,8(^(i?[㵦[ NJ2Iן0қzs;Ni5\5p8ֲM>B/8~z3kUƶĤ|&%'! P 2;z1ddJ( l3ζ䐸*ƒwTt:0m*.֭N>O ᴐ-ih>֩KµBmѾf=99_\K(ڀ,w$cx<LNhE7K:'JUQ]ӼuoּDٓ//g^*fbs?̃ {+Ƒp4r_-w? cFҸv$tnv r06mLQ+h*S|4|Y,jyarͣ.&n5s(mN;-HX}4Irc8hf7:ՠ7t'~uB/$N߼z돧/߼=nN_ϫӷ?/ f $Nun W>4ZCxPse\]skq۳y)gɳTs]M?Uv*!QF+:$\8*F 5ڳ* [eεgRo>u|;ۯ6bC%^p~7 o]83b#.|yt|4l>ũ65n2PԠAKdJ>G$iu.}#8DϹM$)4@喷[F1T^9 V9r/R(L@NYDUnwȓ3"LfhVg]h~rHɈz$97Nv{態^9]NfmVÇT~xzIeAn.!# d{)aJx edEDDY%$[9}M$F@pr`vZlY x4uAgOvu7`Yn|$dʎb.Oh!WRvP+š-T*ٕˎJ\_1芌] + sVn'W+ˈ cD&zO__7MgM E:;h@;A{"CdFXf J~Bb .rTt͡Be'O1+1}9fޛaKܰQk!BT떎]0΢OFj>jfks@iz <Ąi贌9k hm^][oqA'59~c ys>/*:`$%7(+d Fh 9 ,NaUNɥdo= b:`p;+%bIjxq&Δ7t́&{е/lF'ƣ⭭wq.|LySݚĶ얮].m~6ΟLF&!I WpKȠC.tZ7ԺiZos8͖^}y5eC-9U= O7s ϓ[R+{-ϽH_eĹ?+ʬ7$^b&^;h#q!cw!nΤC{tuG2%*ťgl 2A{@l> YZZlQ4^!A)d?쀪GVk6I 0|11HF¯"+em {uȾFdG㇉ӧ;ޥHsE&L6f&2Q0 KȌe R E; ېd6-+&6 -1f&ԎEޏjRKk!'{XJj8a=ٷ I*v2)t,8)<$ Z@ڱ;ilSF*c"RD`ѥhrFS,DIHՖ.*da5Wʲ;Yx7gzY45)gw4@OL?kи`iПLx S$aRESc"镉` *U%0"#-C{.r&/5Hl B0jn#QciMMhU)0%v5rKl?y(Rw+Ue:iK'L}L,lZ,g`P}`ݎETUy.x# !C(:IĂ≛Hq$ER1@:iW;p5r/x0zl-W"Qw8F)Oby@YDS 'lVDs3>P$%M9ҎՀܕ,C%Dړ\,"YK N.nxc{n-3RhR#Fk&,bU"q+:;8"qǾVv?yx"bxp?0MAp]#E?e%n"#ǣ yk>ofmDōh"$ !ƕʍ̱L(PAg dЂ= hYl}^PGD0B* N@+*>ޠmfʀTXׅX!H➰++h!D`9⪐h9=Ѭ$F9.VI  ˊ=pEfk=.4K?\WmxIPyzШMsǽ5N 8)J)rRD|SFPY]׽B.p΢@Mo 76w/L*&.&Gm!GbmXc|o SwS4BRel֌;Ho>,J<22餲c^zeٙM|Cf!BiL 5&pm-#Sduv.[t@jܺʽUMIڝFǒ5YIfB҉DV%1Fk!1ld;=6N8;1w}ٙVuV1 n0u1j{K(q4m>GkX!|C*j|3k].D?i';⼞B{%kdѮ  fr^s4GҴ+M aɍO|4) oY3ϣsԜ_Nj"am ѻ*t55/z۰e_ތW7BǻkisTe](sa[^r*Y%]G@}֌<ޗ`phbZqbCyVD-B)Ax;O(Oh;r"  Dά~"\FEI0ƃ/I>2][x"zb$dY^a],KKCTƬe,[kdVk_Ev+R萲 ~޵S8fByCF y<'gO.}j~?{WƑ@_ވRf w ] 8U"$%[9!EJDQCGlz#^Ȏ@ṵWT  VBJ8\M_K%V/oL6 H,Pѥ˜=_ tOѩϰbR?AdĩcD@;k(B$Npu2'-~ne %p큡- "qڅD+DVHu@uh1-eY_'ׇ|ݕO<{I!ɖ aVh.m =v?㫳|`?N 7 F)C)B *BBÙ\4QB`?ޠ] fPo5E ň M_%'mBF;xcڔȲxjWg=އަAs6eBVJ{u%hR\dJHJ'+04C@s8W?OP$SʭH ' k 57Ռ<[Wc$7YܳA{#xwE{oM{p_L'xC<{y7o?(0Uv:np}<8S6- W%olD鸞\ɸZqdY&Rq?P TNu1ZT_Mz7?*AhNcU+rݩci&f1I%gFUbeNMnɏՃwˊb7lqa?i?=ז])/*bqq=i$GiF4 8b07mQ%p*S|4\zz3f?grmGOrӨ檈J/F8#y`/"=Ũc;*T"O^w֏ !;~>ɿ~Ǔ?POÏoW`V%Ao"=_D@#w_X[Cx󡹆-ۚuیڜqZ3RWl/jqakZ͋qTh95"g+a(+H6n:?-|EEnQE7/B%B4Ӆ٬TJx;>{SPdվ>Z&e#ѽHk%咰H`Nj$**p&Fre^kw6s_6E^Y?pѧތJ\.pUP_(p2ϪpgI)(7UO4.LJR5رiO|07^ۮU]΋hJСV^Joxb#9RA8)-tg‰ZYp6l)h d( lM,)&#\`c 18+ 82Q1xΒj})󰚄2q2:zMy[Jټ7Y kj"i|Svxt+#,*D0R q2MԀBV9]BJx6Ay[ټ%ݙ5.z~[0x.-vIz~7/d0A@юBZ Ѐr6R vSAsebI(Pz0s|<`G *tt:S.1epv*KT@+űkiwd +6/GȂ.7~V!Au7ŝ~lG<@R"O7orx~ƀCg먰9+-oUniWm #jG8>.k'}e5j&d|1G4UOo@.ŷA!M A} @Ī|XUYxKے c˗s܊+×@(W_ {&ׂG|TJ!8G_tW# w$KԸ(J^,/Y"Fyt]|?rn{}/״AD9]T"htNLp5^oiw(mlҬ/qt?}w.6ow_.[l{M_KJEP" !I52`F Cn ͂LIH w / 3BQ(GM@ .PGXT289B)eO*yNy څ7DI!V5yvJv`i7POf \|79I1KHxB.+CtЂJS 
Bȍw%3B'z6N|q*$&SxҖk"$D-%%~X X\,wɓuOoʽ%{awvD%!DM4M0 PB06;]ޮ?Hh߃f]U-6m6 =Dԣ-{@lD/!nӑ?/\s FSCD J(Xϥ"^$]4{||Oe vݱ5f,f ҫdڋvKFM.^&Ԙ ZOrʋILSR05g+z4G}w[-~W8.ݱ]ܼޭ$bcۚ#:֦$%]L)ρqX/Pj&:iUM=v0>@oyhK7Nu%硫QJ rkC7D׈0f}ޮ*^;4[ C-BlŨNeSߒ/^ZW)kT57u٢FϭPNi:== Ann8TP1]N3wF5lv`63MhR&y}b?NlLxדԴ)/[ӔLQ;3 |CQcv.~wl6>jt]5 LK7,Pa^? mަ|VMk6ӪlF=fGNW*KF}&[f[e:wMU/ham I1Fw0dLh*!8B,Ă1ݶri>= E"kFwwJ($y1⦖d44Q1N8"z}:- 5}9 [ ^V0&ne(e̡I  I:"+z,rTB%W@Kd:'by)JZMUNF\ m9POm`eJ-tH LgK< .K}_Ye˖-gmSf ȜIpY1(0*V&Ybd+2:WerAOZESRWXɀ~ƂOks w5ܹ;p\Ýks w5ܹlW)mK }{:%SŠ&?/hX ! ZFTY(l ZRAZs^mNz!ş diNuSx\u7u:׾B^ G삠FCH$ Z%̣XN..AKaYwV2 Lw_Sk!|E"| >mh$$/=V˖-2egz)6]cE1IP>Yc`I+L\yP :8:?O;2ahI/A$-OZEo&y/tysu@PԆ}`W}>( (FjX/ABDn<2HN>rF2 s&aaJ3_.=3"7'' cX8aT|p41((Pi)uA` c'z]?^ne W1 L'0YY.ժ6х`{xl3}fcgc; ֒jxan)B,o_Zx1!i7B_opFD<<{:X{ ~4gj~{i-_6$/_&痫k1̿ܽ-[󳰈gZ:闊uyU푔Gp8]qf~+)~.Y5Q}3AnzuGO5om2pyϦMU;z{=Obp[()"ԝ7soj5m}~;Q'疉D% q }4m/{B{_~V#^^Kxd!qSqey\/zLs=|S^M\=$KC}NJ:s;/n4l31{ \-.iݺx%e/gM\ؠn~g2cnrV-dbE7ڨ )M;uM*[J}Y@:V(gt,A.rbF_EVq` U*{d;N?A߉Xȹ"Dt&LFL'ȒB$SX&)[1*!MLJȴ*8 ېd6~P~!'&EKa 2z r3:aYXNC WAn6g6{}l5jG̈́I`Ii|!G'X2 FUQc3#&,E+JqFyt)($L U-g?2֓Uaa58 ee, Ork]]ɸ'i}@fVe4n:^.W8bIep|Z~cH<"9ZHkOqh7~aLcɷM]Av)N:{O4dZ!fӦhd l^db\'ǟS/<$Mh|zqԫ7Ť5ǿzڛ3>'/~H1mh<_ AB;sc1 "O5^aD11hCc{]cn?Y('12H1>DTxqhDy-59.ԖT:9}d$tzďT>LȲ1Ma],Km3ʘ `K*ikõLE-g! g}|?#Ӈ(E8[/d^6|U3ōFH1~3 lKEUg±+ة * 3%$g-S%c R$!*9Oem5G)HC!xHe;˦ݼVW深+YnQ|#S(q]Al*ѿeANl n0 80mvd>8's~oU {+<, bi}L|Q)& (3,. Tnwȓ3"$3\f>Rw[<+󵈩-_6Wnh :xOgKc\)Ya%>E"mp&֠㧑䝓)j v F*%[nh$Jzb_Y>PL]Xclb>EMxyZ`M48LD,e%Q[`Ni6lwyÎZK-Ym(31`3(,+5Y%+%\XZRk:h.2<Ǻl6:3ZWZG !1ӚH"j9zYcp1[#]2Гgs}9ˇuH|;KSRJx{ k @pS S头#eNAHåRY魡8EKIQ[ѲXxUh |, Sw) i2Jt6kz7IW1Ȣ Ȥʎy}/IؤoQ(}l OB&n(HBbNjPso)2ÍeStH{hX- wX" QyZ5GkX!|C"jH gֺ)VzW'o=XYu,z(fQ7%}:] A{/Ra;JZo#iqUqyR[oYJJlBl̨Ki0$i_c mܬjf=j-߅UHh5IC<D%-AHbRGQ=-ױGz^˽5:234,MȂ乱NY:!(Is.`HocfbZ%96\` l,`.#$%dWˏkO"YZ ł!Q1Ä"Hth*u=+1&!M/>D9ېoOo8O -qC/iG4[`#){msbRqoGFHtq#}_Y+9M@s.s?< %7Ֆ\KNfV/mʲ6XZpI/* P JV jT^F( %U :ҹ[疓fL2Oy3Gl+r%N7?y}৐Ψ>SS 4.&k xy3~<ɟc6F RR[lfxfJl6[GN99 >8 xBFH_{khO&؟LI/-{[̑*}y)D,()-uq=.qGjan[PM]_E,:9~̮ZD+nzjlzyu''B4Ť_nuU3}~Y '9[7&Ѿi~u6zwfz>c:ukoQu1|>k[over$G?Ld~ƽQM%YX%]n*Fn,ƃX̪Ah\ "ǣܖ9NKRTꆺZuR'<c d>_7e goF|#_j&~}p4{;+Wd_{O߽⟯y7^޽}Cw/G0$e迼aQh*oQrEѳt)~MM]ry˭Ron~f4V~-OhhɥGBttoneν$w6z_[*/ή5m)A-˼<{|Dܦ_Ժ}D-b z評*W ]L t-\ t+@!+>j=@ԺRgJ^hB&Bثlgg-ݏ;akgLg׬xtu:\XhƠcU,?Pχ l=L+kTPQQ~UD}yT@WEr$ҨUU|D #xqզE_]!)򑸢,Yj>4Wb$F`ZmOFwN01RN',Jt3wB##xot_;P\vb(~֮~ #;d3b al{_x *?q[e/)=AOX#V!7l15K>[i4{O{{@ٸW,ك8{/a)UrxucDv "4ZgRxQm-{2hoy1wP{o!}C젇ڳI> Loz(Eq6 Z7ԁ(>8ҷ}H︌%jUG) C|M~^]o- Og:bz(<}:)RCht2:3Q}Rv):䊀劭]W%Ti ]a'W'(W"&%W+ƕL0m+4+gO*W qJEVS;+>dоqyFZĒ)Wur\-kŊF͑s\jFGjoFiZ6ȕj׮GƗ\R\1:v"J\NNG [\1S:m+JWVwkrmc5di[w}w 5#~v8+ײk"ݾ5EƴFQcJ;G5|\Br ڣ@ S+5cJ::A``@[tb p t_ ݸݮAv+ngPkd`E{tJXy;6urVCYJFC%B[9I񕗘O0^>$ @kיStZVqS Ҭ8eϫԸE\T>{7딞bUgIvjVFZgOmitXW“jY( k92ΓWTb6}'Oو҉n)lV(cS+ ͈3n:3LkZ0) cdJ Cإ]ɬcgZVL\\y4 {HFVTiUA:Fخ (ޠ!4}mŸx!ʒ6ȕNvz%$W$#W.bZ.WLE'W'(W7&!`dqU2jvbJyW(W )le:rŸ\1m+t+aJ{+ƕ2bZtm+lYN#Wkֽ2Г Z OaLzK^'$l@t_2Md$D-]ic̫+6;^35J8HJ2*d1k0{vwU\>b\ZvbJ\\ы¥$^-ǟj쏽Ψ=!>wՌҵL\A'Wv=3jHFl*rŴ(.WLM'W'(W\1KG$z!V^]'W'(W< .!b`N0ȸS+GJސRa'W(WV ]&gʤXOzVfkV%N8[R>zxfC*=R:Ij"y062OqOS#Z#ZX2%t)zji6@D&@4}nܴ߮A=D4ch{hmL)ʴ1r[˓?5:ꅨ[>A]ݮr5e]/'˲Q'[tL - eB 'w ָS: ʕr+>f^"WD^.WDӑ+t'W\B\rEptru:redOғj7[5ڝJnt[։nzV%M< ccg|lFkZ$)t z#jdpA$}\1m+\94ӑ+-xLHXt Wa2B(QNNPR+/@=7U#\HǻbZ/.WD)e ~>rz? 
k|tj{eh<\5Ė@ȕj׮alBrG?mD+h\1%B'W'(W(4 vZ\17m+\}|ȕ%OHXtAu6"ISNNQaO+ yW0"Z+lqt\D49Rw7@8@M}sU -"!f`ΜzL9֩(Qv2}2m]Yݚ|jVNj{fSJ֠^u ZڝաIw3|;zY!r:vFbR$0.&3Ĵֵ=vaJc+5\\1JFV^NNQ(S)d`<µ""Z+TS{Z|V2 NVdk~^\pIfsUNU''iy>0{o޼oP_ds.%߯jgK]UlYc{ AWëVEg54?uIoqO"VY~ qWT7XZ1˺jKDڈN?n(]nQS+kV ^1+ eqvsiLj=U~6 ƚ+?j)[YH [{f0i8$Y }pY?GnN\ЉKF{??Bۉk"'⇳qy}#l~'fPKt+svUT5Y)<P(ѕq9rsKMo?NHCEw:N޿<2zT uo8 .FV9|~@e誂 _\+S ShȅVKwȕک| e.*|aR)A6yQY%+Eʊ`rw656h4bLX@!XԠN8(l,@'@Hu{PH&C@U+cG=D &23.}$ %_nݠłTM%1,4WU)*K;,'Q_r{0ggAuP<>m:@6ޕ&*fFVW 2r i(yG a7XP|Ѥ`QBą򠷒!HDNkk*V>\Ù.͕t/1 JWLd:[nmG⭐C@8oU]TBNu~d}:MOU;:@vTVB]+!Hb!w5 cˍ~:?7]D]eb:\ySx6C@FD4=R\}ȋI TɣB- ā 0H H jcYۚBΨh Bؚbut  R(VWC+Pܤe$+oVQȡ7tFL")يG+ca7XdЙ$ fZ&kʐ 6"QxTDiz3J+eY`+nZ닱z׋[Vu+7,5+H/eAeP1Ng_|X~YJetC/,a7#/|/{qגn%4IsG/߯V?,oֵ_,ZX>L_]7_\]q7W/_jNէ %__pvVW|~g3N?9Niܜn,?|~{zaHLST\O)~s?ݜ@^Y;G.\wF_;s(NctE'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qx@1Z?7.0@`7fZ @:B'P%N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@': ¬ Q pxN F8 dmP'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N q&`NN $9\;'В?x'P(NctkR'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qx@cQ>/^VSM{}ni{_ۋ#L !d\_<h2q h9x ƥc0.7)hFtјå,mPNW2& ]I$NWj/F=ICWt+ԩȦ ]1\Cs+FKQz+tute͈p\rQW6C+t ]!]d3+66͆7s+F^]5R #]Q*Ω`re*b!΅֨߻bVһja`<. %ۯ/L6=&wxvuGbhܩ._rՆ<|-5Ok<S/~ӟ~އNG]onTm}q/ZYY{oVs/^uf ŋw}W=A,wZի߫SK~zfooܬam;ꇳ7oMm|ݴݬy\_}/^p}HC-eI9yn<K>P*V Qtfv)f&&ef&2MP8F5vNtŀg F%st(]:B'ة";pl0Ͻxb! ]!]%63+S 2l-U_ 2J~3t8Di? pj?B~5EWnrBW:|,3+>XݫÍj.tVC+Fi fDWm2f6tp ]1ZwQ$tute~N u+\t(:B"CO4]p|QړA'+~_"/*@]~˯\]p@|#~zuy|Vo_[(`[YZro7/{uqoGBk #aɏԕ’u_~d?.M7mw?nx4^͟Ow't~{P~yߟc~L~ñŽ/Woſ{.Vz]/>/_'^,>F_[[]2k$wN{om]&Z&  m~7e,yq~lATrufج3,^Oߵ69e2^d5sl7wn-^k-VBw.2\)=sFN9XBSO  ;,wH28_fhM8t(ɉD=Bʆ; wJ_R>ew:ﷷz/+DKGX./B75;;i|s>>]-rk ȇ;/cl =rݵ &95T<0,ɪLѲh0Pvd&&Ir7nt7/Ҳ02%[jOpT }G.^c]%r~Fţl>3t(:B64^+φns+53(IZCgDW 8͇d[aBWs+'IϿw`랻fuLg\텒t兮>u5yKiFth6tpttŽ[':23+CWKJͅ}OWҕul]Y>3&̆(bQ&%tutE%fDW2v6tp̅s522 ]#]9֫?ڧ5[4>O NkbtszI45iFKOӌ M%M{h.=@f!b@9w=J2tpW_ \t5SrÏ2Ww㖊C%GܦpT1㸃4gD07n:١B !- KûxFn^O⋝e5}YN×9ˢvSI< Zv_8}J>09|c_вϖT}>@hz5%9H0NG>yeu,|SչG_ZdtGfgd?t^d/釒1r*TD`-8n5DQL:)/(%~a7†$jt0zjUHMϯכ~O]Ҕ7]6 ˙uCl߃o#&e1\[t ׍̾e{휌ATSfb9&BG XR΂e̙_65W) 1)CepuPJOyn=$\.%6!`BLB1jve?aGF2Gf' EkhdcMgכ@4K`ޠyclt:-x|br'Y|Y T]o|N8:P[׹Tn*Rrnҗfg òBLaҙhY!Wc/oW#5ay}H|r|&ϳdTBK3c ߽Wewc(\OUKQnmvnFCR.A}Yl?b5b'Y~T02L F"ΙcuLY ;Ny7`HUtb !zvϊWo}wڬ& Iɼ(͢HpL"Uj؞~kU(u't?"Gu'*UlMoS'aӅFA^Lx1\:\ٿOni&k%ZgT놶JDﳪ n:3Isq6-CV,FwpZY [ usǿ~Wߥo_W/{{:}ӷ^7Ip0)t6@/bNCۓUW]FP߬jUW÷W ^S+gSn@d?N?o+b40 Ŭ q?\ >OmPykHT\}+LZȞ(r 9lu*Y졔pnT[Ukb' ?D#!0ʬcE ,k$ f u# &L1't>6 /C 84k.yf^kBu0sRMCzk:fXh8}35^$*3 d'a";➖?psDh2(xE}HUFl>5W^-Z:Tl]^RDzR9a0=?cK|i[i;ki~97BOрmP9 \`D$/h7wpLW:20~i$9G{M-5pdLd8PUBk&%!)7y]f:F.-ia6ri~Cɲկ+o󛆾 ? ?uٷw.ҦnXt?Y1775%+#8hP\ksZ2ꁺ#H4oJ&#TqI[7a=[ƈ \|99+fhN"|GBCBL(z%/E bZbLv1SDtbv9w,GfeǥX&zL?}{{H 8ݾ(1Я9f ϲaREe~'-m`48B:R Q Ra򚀴3$Sż{wDA$ˍ j.% 5fcH[)WZ*}Ji2:¬t,1뵝!@#|!uަe ;F ZPv2MmTyx,8Fu:HLI1`0`xDLpNHcJ<-O1#l#L čBaiT* $ xILHN6NbűPe_Ed7Hrvm}&xAK}dxZ"1bAi iwuմؠm=A(&[M$E^CabN {-SHm8*$rNVū/q绳tZ۲,?9ڎ+.#߽rV,͟чn(6Vhƽ~b֚$7)=;'pV)|s^S*(!f0:7 htݚ]W jY*B ;%>yF kGkauobUNWǙ5X,zت.VBpڔM^W.]*Oy"sB[uC(!ϗ)v.cԽų[V/GV ']x1f#+g馳yM|ǛY2 MWƦ<(F?x@aQM[%f7.U9 w=KM|:_)o[:YhIsԡذjkLnݕj >0Knm8;[tC5ղw2~;m(DF-ŠβnHeɩOL8Fi,gUo>lsq,9Sܘ>ϩCȜxBE /B>EP2LcDxV|XE? 
2iH!E1Â=B@}P3ڪZ׀93I*kDjԁ)`+$mcHSV!Uakj爼9ZY}.UōûS /e^hMfw=u>=blc%0+1M3^;mSv˳x^`9 `|A搆jn(\=Y9}}G%8"" DȖa1tFhO$ksE˙Bc7>2==B~)ܝ&-޾tE1hVSdZHɻ@MXϧĨ(X!%'(PQEl7wIb]g5 g:t_tzl_̱3ŮhGAr .u*uQѐC*L30j>C>QAݣtT$QY3.28#(9#3RJ1j[ }yfsXf ~Vw7l*ֻ%2[R;BƛqoЯKޮ"wzgvLx8;ibqqVnrl^V:-u*TKnCJU0TM:e?EVǚn4mXt|Ɇ2G++1(&F.2kjcP`Ʉ?34[?Mw ?SgzPw~gдr[, se@ mTklF^k+k(J44O#kL%ѷ=fp)%QePtR,UOB)S X@W\ak InX1?teos.O,$~ͷey.GgdY f=^‡_zytzU:ȝuuelDhD<3\-\uVgwT:LiFiݼ͕Cg׻9_=kDH\_|υ9!G 7zbkͩ/of9K\c]ƲFoFnslLy LF@uj2ZMc/hV ^~oȩ+gK>z>Y RR̀%L7'obu9B3atԈ6Qds! +Y4X J*cPd`KTk[ޫ9=øǴDσW;X$B@Ma6}vU-WH1 Tm&*%C,i(}{MT.hl+#\z#?D!L2Hh2#DS>e4\I]NAL]*G,K'*E*c@*BHSչ!q4*;Yƅ2+2Z*sMIY=  &$1$ͱ+l@Todf~dUaa7x(Rg, ޸vcc[v;^*xvvӋs=1bVꠌ2ʘDk hK"qf|]xSQC "keRF74e";X FXSM(%ؚK9;]F9Em3S3QLʵq,lBΔJ9pjjjPV8˵)h-o+Ys(riM'XEe qbJAH^PDt͈8#ne $>] تk3QC*MT+\Xg,{PʺW jvkÙ){,X!sHZX^"˽ُ+U'Mrݼ䡸;㢟qqŝ=0k!>yT!+y} $#22dֵss5.yC0t0<<u܍2e*{W\<Pԛg_X_oy}XU> R`P"!T3b0m/w@AP(a_Q YbjK'qdov*ұ}#JN{ƶ dU ,3͖4{5t&9`w}yqv?oU術YrNYǫm6R~#vlZln/}HhR(&験IM\h$3qJb]ED#. )򚜃X hؠo M!c>d ESpAc7si>RfĄ1TtĄW r*T\5Dbٗv)mˀm Fc=)|k$vJX.L.@{и|Z:ư|hLc &]G*T|ȑ!Lon!@Qգ*-9ڋfPnwpjiɠ9L@A=6ƷOH~WA;ҿgkylYCe4x%?y04;S# رx(i UDaP6˼2Gѥn5XAk]b6mM3 ;=&_llbh"Oͷ4?NqU[1S硘A34! z1+}3רh7k}ӬA5rV4o rNm azI2!J@%#ZbQ9hk נ ZM&[&Td)蹇I&#gwȲ;XOb3.=3.Ϟq?s^Z7r?ky$W#CzP\VU3a*pլapլnW)U3U3ׇX8zvլXu &W+VN㇫fK+@O<` j2p~*pլ%;vjV]D"Fl>Y2d݅+oN\VXv@㉢7d(sJTy&_=4ǜ}ClD_)wEfJ7sdVI0KDiKOϖ_"sv;R/>OVK;Nyy7776lX2]\moRm9>Y?e7ynu+&o烴P,/SpKp:e'5s5 8 Ne&\κfk~V[ q1ة$O{h 4+7 Nf.rT>4s \9䅱Upc +1tz2ZKcfk9_"\>x۾v1=Wrpiϫb_llt>|_VWiP.JX3 tUj繸7r~{[l^0Rs|vqqyo_Oo#q7J@ͫCn[Hܾf9xHhb6f"OYc.MsCʐB1>_ o#W>#l*d}6r7ūǟ_=>pBES۲:df.N&Ѭ/iVy^~2YZ2c=VYnִx8:Go)$+& +\*Z[GJUek lUDrMM*\M͹ rR%=Lf|ܜR79ow߽7<^}5ԩ~|+ 1pa!Sj(6A-Nsr[yF B寕mf`A@,&`bHJg9y6KJqm*?.Bfe,RpXVՄ^$hKQ˸KF׈so1n t$<#@W0nPm}9cehn?mOCk|ueOF,yr:* R E&]fͭ;ᒊ'IZ|(&x+ <гzf(D+AUS}4^Œrl奪EOLkg<|ٻFv+ZڎH'n"LxxJQ$ZWNI5V%P,uWǽ`A J$N&2O޽)cETWA[ʚz W1 L'0YY.ժ6х`4D[ԁa?æجТc?j $PT۲eOd%TC nsYLCZjb?]7i8=> :Lj΀ٿCQ^qS7OtnII|qXDӮ^Xr:id;^Pg.{X'H;&=Dc[2n'Tg!fAk%R F:! `U<2:#:F kIRT"WՆsL`{lն~ߦ%J:\П ,#__3hA.YP1^^5Xg1śMRђ C\@:>FB +69$e,Ȟ5Y~Y%!FDmB-*' ( U Z#Ѕ!a\]Yhv/&wlMFb@5''FˆĨ8*ÅK?-96kT l(@MWa;^e2خ5mW{j1VRۮ-O)`f9دeo2OAbc&ÝLz_`i୍(bV;o|'<5r7;QM^}oܧrDƱU>݌EǛ4YRrB>CvsӴȭ}{wջd<l)c/捜Z#!zvcldD6-Oưq0W*P[l:;Q:@}^Rq%.Hu,㴁S65o]|P5[dN% 6^zKKr{̀~ L)X_]UWY.)Қ I2 S$AbѠeh〳>:%@\.3\vbBo_1y^mw ?@+A=yu4Ƿ?.0\(wkBn.5=oT1*Ҁ.,z\ ʢqI^Q圃WM*rY9êW:%fR.7o1,F3Y,UJpX>)6Y_ZgXi͎/Bf/ WVip[Sy~D7V|"-ƚ@k>ft>z:[}ѤsIޡ#蚙+@pVw!a!>fJRZ79iI\͟Pv4˴y~yO{<=a#d,B=ם88lE\,ܚu?q@474.ǒԿwCK ?em lsY҈^f \frnF:eOT*nO(n2gz-4].Í["皋MЙmMd2" %Lz!S>cUCU7jB-gVyP If#br<@HiР0AȳpoUfNVtu|ujߗ|ٚZpެFhg3aeR<i!#XpRyH<;  c#wȪQc޳YcIepp>fY :e[? 
>l eIQ\+kD=hA#gf(IA9'sMEP#i'[V1o"KZ ef6҅N/I` FI%%MdIsCɕ\k85}WI/.6}uVs͠yN6LHNZM FLX `U"q+:֋}чդ\}h+C{> %}ApC-2/ƙCr:}A#ǠnYPh'bn3Adɸ+,c k!-}V89_Ϭ~_rL 'ƇJ5 Ky-FjKIغ$pvF!(YĵvĸO6ZLUnk;il6<[oX ̫m v:: +@LJ.-?2"mf7"sS]#%2-]Az/O%Xq^o'x,` 4o`Txmѧ8:gV(()<:l钠RT22J Ȳ1 a],Kk3ʘ `KjT(c".nVm8OoPcHNhArP[IS%c R'h6U IC7;O0Ͽ,l)_rftEbr >y%TȏRrCYkW 0},z )9A RdlD珋GYڏ˂@rl n0 82mvdz>Tu øEY#&cw+{{ \66RĠ_}]‘{Bar2cʄ8|L.Tnwȓ3"Lf̞cO`}(Z*w!}|YC5Y@-WJl2gVaeCOQ*z5}MŹty^GJvAJ&L+O4ov|ެ4\@T܈&B"ЩLXAeIsIawx\5n=Ι” d Ǡd_ wj(ʀR3mt,Z)%ѺBg-,T(%[COcVG#]*UdEMǣ`{tSll2YKi\J955؀g(121qi5CO61hYMRm8GZ?C>P`$C٬w| YdȤʎyCtn6˷$>h6* n7k$S!1'Lv@'=jkјL1FgOe: V@f2 iԧ=QϡM9ooxzwZvztOtdQf!8hB̑a, tBQC #*F`HofۏcdP1pdD怚,:Ƣ gښ6_A%q!*?87IU{jm>lZ\S$CR$(LiUq,랞 B' L k屌Ç0en:i=BH!*@r0?vbe, -!nLkMq(Dcd C:D+u bAz7kU(,6Ep2 #bL ‘1@PhN>Nk'^gُaVYRq~ ,5>1r1YO8M>N0_dfa'?lIO A , ge">+q/ZLRgcHD0: uf2 x2xy:}Wl[0 *:mqkBTΥCR R^@*_`z|VٰLm[/XR<|YoP~;PG^:~~o1Qg?%X` M$IFPC30^54Uw9قWv9q^1b g& Jg^}~9z(^"p_t5;e4U>ރ{#!e4 k$yT3qg8>HlT^^HpxtpܦgkDoDaMÆ2 7`-u {fu1F.by8LgW]O'u7uMNvDŤ;˰ꐶ8i;2Krzﶃ49RB An)1HWʒ=nsg38!j ?24&:I'`cB SO%qrfϙURJ2T0tpԖ%N_7A*b v-]fF\AA!+ wWodY-<*7s'|)C"׽t/~g3@E!Ӓik CmAϯN[?, I|}k;[ PH&߫U?_nqcYBKOy4J t{aR41N~͵(VR\ XgA'{,|Ŏk{?,8񗌒ɳu)KzЫA(ո5mc[q`ů~fVWT~ h~2,fMzq'iE=pq~(|dmƄDKk7PTf_X]bJP B1&n(FZ ~.nC^Mt*'<y,! 8<NcʠFWs[Ƣc\GPP DP'G?dC5چ8ع`K3j?jK{u;߷mZ[\>^r쬅 $g\Kr1 ܢOow`;jSID90"OyJ5;˜Xw@$6@knX}][%fD(؝UE~5nJp;STɆ67{G'G7h;*d2DNzMot€8ր O>#3N:e`Gգܭi{+xd)abXQWt+JRVk!0}7Z'Z9ểB],i CIW6oU.$]S8G+H.-<{]`0 ӧid[1˰m߮^Qt4:s#:2 VXIY+=- k&PY6#uglFEL,vS_߭c⣛͸xGӮy%[@g] .GD㕒/mbmK7Ŝ*Qn37 ݋Hwa4ÄUBLUe2q$y;U3hH̫/-S.Ȏ糊Y-ȠF SNOQg|A1ȀNùrXԔ);}>+lzqM  eG([WGLr&m r9G̀I<)F3գ׷&&]ӘL^kB|Vxy6م}v5e f#Qw@!8r&R&ϱc2g0ȥ8vv|[YMvY)t.v|[%1٧u`E&A%zhD-Q CIQ"H]%\K:m[nՕRw_]JR/*QDu=RW=r퍺JnkoT냺z:JSUeX4W$EYǢldƓĖeAQ|b'M b.UnDm9stD*伞 96,pPNYMUU?M638ju(_EǨq\{!׊# ?#)Xe0iit4d:lutƀz7Zu+W6QI2p "^J/iu{~TzϏ^7^?7Y?&kݖ,E1YrxT`pGMko2Frn#f0'!:`9FnZR ӺCUTokO%~;Z.=+h}P5է";EӅ>Ƙz&' ^j^k Y6췣u: j%볞$v9d-vP-C zL^> .*A*ejP;f^Hzܐzo笶J;0^).jllЃCzS_SF_jRz\TknlcjM}VZb*W5AFo/B S_8h6.O)>tʪG)H>Qla -{r֗{nq04hhh_n[Q{ѽVoھ'ӛ.Un|Q=Vcnuw;׽=ZrT3KV8~rZQ)̪CM5+&&Q+w>&QO2̪V[=oMAoஜKn?(E#D6wD ,pV]AU7άe9DL:#17BFNyE5f( C(Pʍ"VFudXDcŠZ )$놷sZh::5NΉ==pSqկKXn<4vDpm6O<5 {bC@ HWc(Kp+RTaδ<RZvh[+l'j$s Kp`CIB4c =Q2vACKhx M$Oц[b:Rׇغ< >3IOj1s)-xUf2ɶ} o.r2E)Ѿ}[¾{.B rH# be}襦0+)c"_gZ,i]H0uKchɾO.8^]ZL@yM>jb w MX5u[]jM}w2?nK fVڃ $Wc{. ]' e˦. hqFU5 \dP{lcesx݉/EK{6!(c D`,\HK;81 e#}4P0T m0㥶Lc1>) ofWOp!ȹ}U*Ra;o}R%=𖆾| Fu{=!5ig]"0:PK שVʀ eHsnB1iH>! Bi#Eƀ3` +@ErU0F.wmY_i,dc(1IֈmD 8H9J}_u7)RbӔԔhAu^^JccITHex\rA"Fx5㨘[;"K9s´־ _;A;܁34Ar/rk #ulJ6\q!}?}txWx҈bfcHr&:'BG ZY C֑֡ќCcM[ia fY6j4 :2(fr"ֆ`dF)S2`K%#qVGm7b#)`DF$X@(fEwZd B' L'}LCX ?N#:|Fbˇ!AxA s^Xa˜^[a'!( KXxygws-VƂ0Bs#dD)30Wh,aHhqAn YB3۵TۨGa%,bQ8cj\Č8uL#oL!ꃏk&x&k>:^0~+V~6L`vaW:-WB|ןlPU^wٹyXɋU~Tj~s!0d.C_%E4+O`3K`/!F"ҙΙ@cul.3섡Ϋw˿ ۂaP& -,IwW2访y~ᶟvP8]c"}l4E&`"Eb^.+3IwQybp2\Q \wh#4U%8m1i90ߨQ%<6Tɵ.7ɺb3Rћmqi~ppv^-7sFяyzo50zkL<%ƷtY mFc^fyDh\'4IGDn\;I}nu9ȺVk*%RW,؆%9lbqeo_TW*`O|(+יLa; Q^~{W߿c'ߞ߽~o0d]:@#:pӌ44l*GeݦF]M>&\-LuQz7'IBG-NR/ENy NhSj#= Rf,"O,gY 9hM5St0(i`50Š3CInmv6,Y^^Ppxp0'<Ӊވ7Gk}AzaAN0ᖺ {fu1F.bٞ4r3 ۷} ?Pn\j>92ynTڰ ajiߍEn}_Mm 'Dh Nr[JL.RBC4-,cR']oIkGv7e ʸA@` 1ːD$!f(! %,HQCt/X2{&*اtI8%b"XZYh6iƺi5 xt|\A/GW5 x1NCX&"[1~{w _>5ƌ\yiUƜ#AJϽ`k8DlcHP=ǚIPiEHH &o#[io_]MnE>jS!xoh0%[־y&}CZ4dDE7Ln=pj>TM$D1B,90K*,cMCs o|? 
ηD-!:p”5yJ'iJb+) >DVxbi:PHtpUuPl#m3 |n̚vJ߿'W,@a7zn QP<S`Q ju (X7 [16q 62N0FG`#@5@T!4lQS4oճxթ{)EM[vB4dCRL<pUn"El`η@C Wh2mYrmfYB zVLL ӽ׌DhqquuF0iu7{ά\`uHW;+ZlBф|s(uu {m^A-"Nz]W"*=@Ӻ(!kh ZQ<[;9 OS:e#K+@Y?8:'Oԭ> l2x_XB"PL̋Y5̰Xo\ ϒFU"Yޝ>DPKf-F9{@8pij'*x;6TTF֧.@;@#]b&I~(~p>Y!NzUld;q[34 >ywi&.Hg?9LIo茧g݋jGhe\vð̡/o^W˪m))` v_2&Xw?L,z{=28;id<0x@$zae2c7߅y׹c]KХ~a9kxw5ltmbN:Mk5I򶌊%}cmGrh>U^ԂLҽ_\5WRk``Ѕ@^bZ-/SvyÛz~**E;QEnn3No`vUl311yQ C;"]{o8LOXͮ/}w{2b43|wL ,w.r;zTyJv(6}2r9:Tӏg hY3"T5+Ɛr9l8r~t=_Qb #n 1+9Ì(e4"E"!G͜Dr윤)zh6WGCJ9ō8`1waʽ3øĥa`r46l7֝}| #Jxn[{S=ȅs׉SU Aلw؛ L`]i11bštЌB.Rr9e&Y;s\ΜyJc.HFeG>Ud)8Y vc﹣*ZaRE&Z+ʌ;E}KcP mSX}HibtA1Z%kki:Z%<*k%:vA!L"[ÍZW +gktju nimg]0lS RuL*wƋ 9.`d؈[<@rFϵ,wJÃ&~}Lqg _0$CKS.:@d1F: lp6HFZ{1 1VZE`"w^!9Ed nC60P.Itaه"le߆I.|[rm>.F㏣8mo]R<~4@1l`NŲuCd) GՆZSʅD251x(mUgR(EL1R[> \q$Ÿv0]Vi`1@bmylQ$|tFV-J2QI(-SdVT1勬8dbѡU*TN8K09uϩZXˮ^1,%(at\`ښHTȝ "tJT@A!T nV?یN!sXe 6j-H$@ʼʬ}E<@w"QiY)%/u '5MAZDzҢCW=7"\INف}sюٞA[ۨNO=KxrA4*,]겛ΥbbNF)csT<!qJrC >+OR(а&^e_] &H0ܱRbUB :4 hC"&)F 0b- E_ _h>B9B 5PDpJ j1qjӀ!-N](WLO4_mrf߀.XNޣ]8!0+(RBrh4u@.@4 Zq,tJ`U Jn)`Tid,&vd,Uaa1 Ea, >~*3597_~1̀ʎFϣydFe9d\;a!!T%+~:PMJT 6dgFs \*/I)\cBЅ8Ma jӎ}Q[Fm٣v`7eQَQI7!7,?^ )ױ3T};xE0 wZP@9x\gGX"EDoXģSblک_[J?EDUU="n.u*ҡNG21LAq/&[Q8M 18вN5%D\* $( X8Ijѓf(|Vi)q#bK:2rK{Ŵd_\ԅqQNt2RiѰE"\pUTJk0B:EH GR[\ ӎ}C wcg)~|GmG M˧,a O(з|z-hwJ'oTB;*)+!%reEHZWDT.j[9 5ƴZ9+{E_s4?G&nH(g|)ShK F8GY˸ʠM"+  u(oˉS행q+U9dme61}Y|Y*r3nűwƙ˅%3 #3u,\V;sYJ;s/љSuDpVh*kJ JuPJAzzpFSŎP`UWUVu~&G"`Ŭ28,5a ū[2:app:ſڧKj*Zcx*z; $Rq]ؐCЦ%!ZV l q,dBotLd) $c BѤ \eiupJ<"B GWY\q4;YZMWYJӳWP`UxA*K{zApʬ&kguu6<%~?iLs4|%riR #803$KԸ(*^T,X"Fytd&Qx,e7_r)~}5X }*ysLR` &J 0 y+S 4vcIиnJ_ݽxvfL?ZϔNq×$#AO \Nwb. lߜ)w9TO1&oi] %wJ, 8 Q*5"V9ϣ?C/%Ww|KϞA9s>:׉weq)3$JfH)}MWT;PS]fO:ܖބwX;gđrՍ hD TQk$Ryf3`)kLHڅh*<2#@'ZD™ٱY_Mg/_EK~K ݚjv9tEbxdػ>7ݡR61ˉt(̕ I52Iu0 dT,$ 2|A˭qF(*uE%Ӂ *$aRg(eU!HThߠsr}P3ə9,YֳPL $}Ϝ۝Osdm'yVczBO<ώp'a5!ē\!!:o)QL/ĒzE¾EL׭e?|!:A+qqVf)QdqD$ (u`}9-u9[}9_ XzZ]G 6EJ{vC]@=,ėJx6v)@5x0tb)= I۹078? W{%ɟuZ!J>e3'W8.(b2q;׶VBy'% \/whvXʱq[fGSSc.OmO5QPyaPf}`Z6C尀8\*9IA/itG97ه&⛿yY_rOd|_K=9_k\n^Of'q8WxӕHeM0EՏbFgLFׇۧ+ua9]mL}]:w ot8gsDJ b4-dxMfgH qO9)T'[ԓ4LgOF_~ov}, 㨹aȎ} B*YA@9:hcD.CW4Kѐ xX.w H,,fe&PV; Tx㩎F&]CH y1qv1q$-.Krzn޼r& q7=1{l?8t<{v]f&i޽~"kRڨj:X)0AI +5ߋ~U1J+Us'$E'!q"|%!" .Zꥠ!`"(VZA$ BJ@/Oz͵ |AtжzUعXu yq1~7J $N%uN+T|9Gu174>@( $dp Ԇu5)܄g`/(h1f]ww2}JJE Ǖ++"@պ rQIQq;jZ0t=ȩ{΂$K$Ev>DVі$Nq.s222Jnzg% u1{v3YeMK|˸tq}l8 /S17[mEvq#$=\g7\&Q\6&^! Lu.d8bNUHS9L`s*%=nkxchf% @Ygz_`{lxk`=I,-9܉뷨-K۔%w#q:LJEbb+\yN &KNV5XR_jWPm=[&@"!I =^]?sa[Ex;=;s]^_?TK 6e ;tlΪc=c1w\c< ӔʤK)QI*hKV6iKB@O"Lb UV`ʼ@YDJ`D ;Mnl&Ύ)s|LO=.;of5J ^?arui~w|kk][XLF3Жg8b*GW3pne:LȍYytݕ@h;Deid1Xܚ|Hn~wC6֎cMw7`[>y06$$osHFSxڦ&0& rq!YkJBmI'4πbV"bZJVhA`u>p+qOa^)V. ɣFʪQ[SR*!P6! )=*8QB['`4o4%}!mHdn!EQ\" Z2fC9 DInU(gR$$o0X)xm@TY,3C2Jr"vcCrL3~3^!td"S ~XIEoA*AMe[ؙ&7mɀuuXYxel`&4-9 vM8/@:K'ϫbtaWcc2h 6֫ (T"1Xao)9YKMͱquüG~k470V nPB;C81rر}dH 2CRw|%Ev_m}EWϝݰ_@{o5Nz/"^v_vMجW{5K.=:2VUjZ5lyխ:kQ"+i-2 >4{z BVzmk{֞XT VN$CL1ٻ0D~Ц(HtT2 U2Q4R |N(j(WtT"BWCFuafl\}nZۥ&W;pn`Jxql{|ρ SNNg//xd?&zQ'5uf#Cgdт"W[tuDG}}𠣠H J O)CNR QdrR\%PŐ.GMP [>&HOߤl$/~_<|0S "u^eEU y9T-" 3+G[z_/nOpEbTn|ƣoٹ/j>.$& m$iH6&S jiIfo1M śr6>űm|VХ5v?ӳz rd֘AR8T=NKIFަz%y|wi7ޅ9 WTF&EV׊RFU Vh}{e PF3Eһ>_,X-FR"wdȱQE]*6QNDӆ7^`᧎b/& Be%REYcu0XTaʆ lK988-b(SOcGj|i!n:-J l?|ݏ/bYy9ŠX%jR3.[oM%I+E:Lh 4dt>с*Zِh<+_ֻ2bd3FCQIUhML3QlۏGiYw|ګ_~>Ŝp_ch?<\]2ڈКRj ]\GLT]V*QaP#'$<랐p uTN<NȜ#ń@! 
3N  1(Q(b8ڭ5h^f#Z}8]Z=_ETRBFy1  Ϋ)HV#J*U'@GϦ[?fru,QX"+ ESTY謩M牐5: G!ܗ9'8 2QWK4~~x_5(U|v1bGYtc\1{m^x ys::]U)!nڜĝ&5}Zn6ο^:qV^7I^w: Qs~مﮧgޮG1~>,cnlW#ht~vp/$ĺ m-]kFmƁZ̪ē6hfx(U|4Y9U)6VmuRy9GjÊ篣q 8 ̆j։v>_3V?9zۿ}wx?i߿߷;^q#0ͱA>_>KA{Ml4l-д^ .횖]^ingy]rg߿OhrjG|^W54H?ay_3'xL~&Hw/̄Xn@Lnz|0x) |_w}btM|{^A0( dPZdY\f&jq2jWv#tai/.tJ@C |6PB%W +N\q5c4әձXmO*ל~j͟|:}[S>vfo^食v3T7m;bp ៲S?jnWz۵=ə @.WIK4}J. kx1)@7X.WHK_D@QNe>&9k&v6q<:'{O# >(=4ͷ?ӱ';xb٘CvK}JijSU"aHk[BF'6Q?yLz4.z3 @>loॸ{޿9ĻzEX j9h_ ?GVñsT)9A~V ;j+^۸>\U)7h$-V j~7-O ޮMX0I(n-k`B+m+$k]`,dk*U-th=tQ+EtíWжUDաԄtt*-n]E"\ NW%)t=*q#e:]Ek]q)]9+SPӏ.?=QV 9R~9,vQ}>o/Wz: PVgQ#٭{GdKT`cN׾fhw?O(%*XS0e8.xM.*%]kJjM;Jv;:!2Oqʃqc"z#& wn!ZW 9i{c*B>hh-6 c r+Oc\cѢUhkV`b"Ԛr`Dxz+'/7JWXd/DR/ߝ"J>ERpF[DWX`p򃷮"JYWO|Z.rF`߂COZ9Ja%Wίxı̴p'Gav#tvHv0K@\Zɰ[<했?b}u%9Z|rC""]q_5#Fie3s'b}k6?(̇%P28*E")˘ Y d|oϪUIKV^&JVx>}u@.q k~t"I-}Q^,H^d3|󈘸j 5QTZ+o˂º ,~,Mi>eXS̬c ŝ^j)<(!Yk8 ac(:9<zYܜ}L7MRgRPxC a]<3KM)qZFe)A@ q5">2(q\bYl|q TT\vFy-'N-Vuc0Wꗞn䕿+U@E.`3;YsH.)z$e+/v5۫*gcEi ˅&eB9Dd朱)Qd%J.˸)pG^Hai,X#̭f7LPÅԍ1,V3t!:;"E Rr9&L* ˔fi%hD0AP5@2An>{ubQ\sM蠔V](㽆Y0--s{m}Om:P0NGl'55Hbe,L&L@wc(hMqLpʼntȬY@\;EjVJ[=rBoCmg2J`@YRfqI1lq@̘h@PK<yy1jD|:|8 q7eo?,y_9:&d;3:ɋ@0F3_UbdCʽ 3 Ʌ6O'əeI}źaTt%p'UR۵H2?#_-;wBjpkb|sM۪![јof]Xނ0K~g|<:_ttsx:]8MJ.-mH+eEpiXH|Oy98_ÂI ;b-\/~ܽjF[TMۡ*VK&ozW]]:CrKC9,\_䧓 n(Y͚ ɉ_AO[\]DK0j%{W[ȇrd#rL\/_P|ex~ŵEkb/ ߫De12 ,k$ f u# &L1't;顃 KKKtv^Y>5azX %u}Ny.zk:7"Ͱ>Aza@N0ᖺh {fu,A( #2,5Ft0O(e&7ИL\3 Hmb^~rbNW/#ӂQ~k Fh|ͨ1Jr V`]؁?=OI٤bJJ͆޲zPX*?$(bLV$Ld_<В#: "4avv|H/Fl^[I=ZXt<+cKɦɠ,J {.034hΦv6Jlﶁ4qFBB RbR𿠹ߞ_@F@4aߘ׳w/צcD]3G*6)-wK} xIxqb~"!8~i9Mɜ$0 YȄy!,L{&*KI^a 1XZY0liLj fp4?[{͖Hc@ͩ'˹yIU>|[-w0zLφ.kKPiA>/OU0Z ㏗%Is)=[jϾtR9LI+-KBRx`nb6%֔f_VG͐F+tko;٪ "9%eъQnS J695.KO1gR5:QH%>eH] R). s?;(?cα[=ѺcUQl с3VR[M} {TDVuw #z!Up&3>H/5eDdhA #(H8H76gǥ5.G?jW]66vC;u>5I̚3>&g',i"6Eؔ944%`FIG[#ɏdG#mKSHNkm$-gCEY׶(#}뿶cY)]+!yVgSD4r3Zl9I p1b lӰKZz5o|N fcƴDH8[2BaTD3$Wp:FF\qG 1<1 Szl Rx<*XKϸR29¬t,qQLF(c lIK13>+w'} \Zԉi[ݴarZKzRO<_CAQ92)C FH\H&*mĊcMᑆ}<\7nx}ѩSq9Z*eX}fAL!3 Z(H{<ÊC >Zk;oN+^4-GC4nQԴg77c!{  \?tf_8g7G i?{Wȍ>i_d@$Y{w_g%eǙ ߯ݲ%[e(56]|~WVx9Č3;a ZiMyﭼ{VWf7CЉO2}ѭri\}dto@Smz6I;}/nxɉ1ew3RZqS)O\im̧;OYP4V_>uP?7fwZ1=[\ɸEmԥ35:?~,x^a/sWGݞ_O^M7gwo>Vn+T}͐9a>;lry鋃]l%L': Bk{&A2~Ϯ.@gH79M˞_Rn'vH.jv= I?OaN_/vC1"Ӥ0u<_33͙* t(NTw 0 #'i?U:1_]܌oGk9s +{PAc-0A:̐ oiNG?\𕾟]Ib۷?18* =Xl(!ϣnv5j" E.?y H($JݺZ9Qf3Om- FyfC^ eN>I>uK=|D'lnM7g<,>]&47j}oQlBnWW:6]hsӥlzT O|f?.:?k{mT}.:7ED ' -mK1+ň՝CWwV5f"oO~1zƜLL,gB*Mőe//ҫ _7:'eSAd噙u`N&#tR5"}hzC~y58P|uCT !򾽃*£Ǫc_-ʶOדN_i75ݦwS`t>c}Wa#ըYBcH+`i#smGZFj}D\(@;͙B8hws: Y`0`dÛf_섍1ry>rl >}`D,E-se[͜}[\窱r?VocɀyTR۶2kIiUI:DWỈ_x)5x3ij X 5@h+ &lO`}S4<0esH*%q"CdȉA=+#"pD'ǃVL_-r Y&{&WE 7bTq,gNlszRmKWǺ=W Ru~!ꝳ=֌]aؕY%̫pz+pzRn Ypz<J8蹐ջ*WԎcnJ3)w>4 4Фr[,ooHbc& i)?.ާRlLzed+skH׈{ID2w++-fcaz}=f{%J!o&tMjaBIHssE㔎.!2K'L) {44)^sA 9W^#`)"r.^:`F:ρV`1)z6K_b&sbet%ѕ3XgZǴ=뫏b\qWn}nbR`֎N鵉vYV ivG,ӳuۦOt<҆[M|u"[H;ȝ 6R9ή(`o~"޴oWֽ|r e+gnR|ݒ[!v7}3+wn_u}u BnxJ{:W"SlJQ(3.v!D'G aW"'S"i=F"HfJ͝2Jj%xUjf`V{id(SLNA2ZIDګa*j] \-MWN WO>wiˆy훭ʻ0F6V% ,-wJ"R?G'Y0*t S"Fc޳#q)R(JJ4ѥ93)j\R9WfY:͒Oj6_J`<[?EDS O]YYړO': AȠGO[qyd\Y2x]efiSmֆh@3>tҜ$@C(j+32g7"Z'\\,3iX'\ZrEN H[e&4$c&(`ȽIˉELBk'\qXw2 bzgJsܖw~|GD !e y| !YFdFBk$g̡oNgWo6hz&]Ʃ5^C)tM1?xVʺAׇ\I OQ%AcZg.E0}5z,:`5FJksE*X [d_ٜ=m! <Ů=qVƸ^jC\2(c,'dyo/씛sUU̽P0uvċ4EєbHjCClOR%Pp}4De5U C,[cCRMMc3ي*d%Ur rIq`2 ȁu0!0hA4ˢ@e.+C|@{^6F.LG?,qpUHH!i$"! 
KI,Z6f:C*y1dQx{/ QAciC$`&1l(" v!j(J ȆUԔ&Yw43I&]]s#7v+,=T!$}IakW GP(IϹEQ3j(Ćy[HhX6%dh E)taIs(XT5 Lڲ<Vkj\ $qe%:k1i> VD$ D8(l,,=Dbc0z0 Bs˨ZQ0Z=YBiҸA; _{KQDDLN*FM&VOnl1 K6/} U}i_l}bKOwm-Uv%cc{֏QBka$kT^ፍPNhB)hzI+CH U20S&SPaN`Ge&GpΠA(AzGZT}zT [bc-X@|{?+IISL1AUAU7Z`1O|F//bp0'JQYQcDԦƢb$f3xag=:7g5(W Q>FQQc@sjo6i1r{aƕ=& b =|RUzfRLFALFZ]5gB'е-SU{ZŠZAtO}gRtNv'hmeg+KxIJVz~;S'Go|,,ow]OQϳ?M DU&(l{B7'uq7볲RX[3k_NrqE Huٻkg7r$c]7>}L/Q~ Qa.'HۦmN}}gpvekKvlmgk;[vlmgk;[vlmgk;[vlmgk;[vlmgk;[vlmgk;[_Nirwk;،](*Z]c?ibNZu "VҞLi/&TCw};iQ&v-x/zGlQoL*ZK_dZʿdI 1;0Od'տf{{G/ VWgN?r^ }ƇOpzߡ?ЉO5?PftarSu [ƙ> gH]ՙ uT. =ogn6>b/`!Nu{H *jڄU"1ڄ2%%RҪ֢h֬ѐ)"io%IT sy2H۪Ն["}$ M:]$wIIw x3)C+;tU]#_J]w'QDI5V|> Ըt2ߙAru6Cfi<[vG࿋h߲Z|9:Y\/d{x(xw٧Lsۘ䮥'1p<`L)3 HXϡO {W_սt[3^}r6}amGrd\I4UCc.OXs Qf=6^8ktv92E?~I3AO񄜈3q#fo"qmΌΌΌΌΌΌΌΌΌΌΌΌΌΌΌΌΌΌΌΌΌΌΌΌẍ0ZʌE[Nf:Lf7̈pqL1M~iҲ@iHR,I  LXo f3R}򞍧fl_)iw}BNB ڐ PZgBX-ңL J-*ӣx)2/Wt˳CvfFBrq7aiqS"=Pm^ _q?~Wݛ?}Xq>WIFGUz$#7XimB?T瓪E&+S왞pL'J9p# .Ś/T2&'ZGb&xt պct?g9CW·d@EZOOXS9jMKQV!L{dwG'iS݂WxW[[S[b/&I %_%_%_%_%_%_%_%_%_%_%_%_%_%_%_%_%_%_%_%_%_%_%_%_z]Q@֫?|XҎޝTƷ \kY֟Q6@5EpwmfEhC{\PJ8.㢴h i vt+#+#!`L3tEpM3a LWHWVzZ qtE(#g%]a6 wg}p7@X[orGg/~YZ\/+\jtQ${Ջ_qqy~uւ<|v7/TR^O? n`^_/Tb?}?^/;tuu/n̗~| \N4~]>S?{*?:'v8(c3rB!nD1xϏէӋ Y'_h+?@Y(ZI##vJV3#^Kjͨ Q֪ B%#TzBdž6eE ]RͬZ3eB9FcmؿNEWVо6F%=UBK+Γ0Bfe0LWOsp ~p\u`uZst3{po+t[/5j4DWllnx]~tE(d:BR::+C3t^Vj5w";S,Е6-,g\)}+tEh_{itE(1ҕ1^ukm$Žʹfzʁf aƽh)hlm宋}4MC;kv.u|;M#ѴsNN6k[ cLwuyq69qu tu~Vo CWM_X>{\ؿ}saaX*ƚ} hQr*и^ʽy{%+ b٩ϫ% NMA/~jjJ3t&hTkx]Ew)p{;sqCd63uΉa/8}!|3 UڅК.y]vY+7CWL,7 ;~i㡫-++Eh篮3LWGHWi[5l"6BW6ȹ܉bjo{'{pp=qcIcM%5!=t~hB9F57DW8F ]'aHW> /+6CWE3Oms+Mx*Dc*v\Z+BfPc{'+ A̾$?]'zdC[zl mRW3{ЕgzWB;+>"A͝L'LWoBWZ8%Z+]\ ]ZNW1]#]jO5~tE(t&teF5DW7 ]CyCi51ҕ 'BWbC捴Jdvl5Q{]APX-4 f֧OO7uz^ Ų]ɨs3nXwOgMGdo[Nwi//RR|taM*~ѯ?߯ 9{7קeDt&6/C28#}{6n-eX*`7uzm$(0ZQ5ȿa[e{+VCyH 8wfuJյluk{[Ip i / hoRVjj /?=ءh6$~^oE}e W<vՑMo,u~)G?%.qIhMɛVXeKHn` [Bt)k4ƁȕTJv9K :.A 3?\$m%*|8n;g} LŐ^FRxdSHԔ1тFQ4`pHA/ZoDi^Y#8zSk\4Nj4g@\u ZPRVFWEfA٭tA«:lSkٸLynםA5k]w|gNn}nAX-r8`[ 3gr>fb|m|Z"]*8rFCtqB:( `J9^VGoj=<$EDA"6ʌ ;6`# F 5T!1w'>fiYbWe_,5s:X>cl>^wZؓPʚXM]SZ+ݵݚ&v>}!o)`V{W@sblƍ c0k g~vTYQKy.KbÒ,@ 0?h4M*>x%-gU (uhaNj0ȑ?`:3=k0n=-fYcoZ?HJy|&b(Cg)0k ]y K >寽f~1r*T{,2*Ƃ(=G>~{YQ,&˺.]LޏDTbfcHƢ&BG ZY C֑NF$\.%gV "aHo*E<뇱eߠۂa@UiC(XK %-w'mIP,cc Voslbj!Rר[ B`v6?aNS!mnZT5uצ90=YQoRYS=k).x>Gd+83"fl^Zp$+< rpǡxRij%7t\ubyLCQ F0be1X tj3?g7tNtsNnkuK_%RBeU 9Ͽi5BV,l\*xˋ8sdG'//p7'oN4?`fRoC<@s=GvoٺEޢiM7tvwiWv-jkg6wK[3٫WGdвҚJ_h-] _W3_ED#zݭ{(UZ2uznōekN>~T(i'3QL4̃N7a`41't>6,D^?1q[qM'z#R % en8[)QeJLaM5*o{~g&.Q~ݦrFTlYDvAyk"ϕN f!mv%_ aU_I-{H’&&~V`;=Na&8M :7P;M\@c#p 8~ 逇}7'.|A90++%ZpM%X. 
;G% QQ8.!p)E:1 il#c-2e:%V<%Py56(;enNKcT` 9a3ࣗJ*b)7{rIoo;fй+2Y@*@]PǼUL!g-KĜ Ks>=0c044ˀ$'`^06xmA3V,qp  Y`gQRHN_'zMgN:WzM:#pBt7N`?lO=`hRbz#"ᯠ?+!kDI{EЃsYg'e \t8X؄3EBݭ4IfcB k/%qJ7|%Ŝ(aH6MgnӀLywf[ zS;Rn^ iH_|ud8Α ^(#K6>:`UBk&)Eik)M0ok} >|ۛkl&* 3z6=7OHJx4?< a~C+ur;fF}gl^Ԫ%aە"ۣM @tGp#O.l1T)b+t@ <!nR@S)L :oGawc_ to>B͠^Ҏ@PG;YvmZWۦwiQ~(OZݾj|f##)&~H֙#(XԼWy60UzIPkAfn1%CFUz0ƤYֹIDom9sOs~܁d kCx򼧘=HMv6YH>Hٲ PKލv;omVq3vs FPk<_qtl$ba:xD`8'Lq1sJ@;O9Al=ՙ<(Ҩ U 3I"((*m\u0ٱ/<\7n/EК<^YEbĂS" 1BL +X Z{~/l zٴ³] ;E]{vAHvd/u!^U0x8*^H5x#;c&äрisw: y[7kg7}&BqtN5QKOuZʥ17y$0)˦9#҅©fd-׺훵TZ-OM[ᵊ>D3BhSu/{<'hd>-Q9}.WKiPK?&u_ G";.O8oVwMjSsv-KJAɰz0]Xrt:Lsaxf&ͤ}L秪ꃺӼ7` p-=4d CUqreހN>iO !iu3aMJ%h>I$ }Ay!ߏl9=rR:YE:%?jULt8&̆W v]B$a܌r] e }6ctx8UճdQ5/emCI0>G #|;d4 ~'|ٳ/A-f @ jOIAcBI@xOkZQyRԶŖw^֟Ni+WHШ;ξ ٥]kg-< JbA0jIH/dB>t'y=d[ϭyPSS~,ɷO1Q`@IxT |{3Jm޼ȯwS S)8zdZt=ZwuСZQr&}#gbqӖjkNˣounBLgՐjZm5;N`@B!snffų|Xb9ΙƤu}~t!257g&ȋ f,%4FC`{gYE*hi􀿁s'1p0Y4FhQAgږȑ8uH H8bfaw^6bcWjI#//2eIłXjU*JRFlɫUQgg爦qHY`I6,95F\3.o+Dm(35$J쑏g/$_oמ|ӑ{%6e^h}Wv혽Ϛݔ}lc?0"f"fv^FϯUl3^28YwR%AI&H`T= Y= I,NH4l,%șɆtFd &!U@6U9l0!ᐳs]Q1Uy:Yu]jb<7oB]2Dd_[#?~6[lgut&uFdHGhX LQMBَBt T¤) utg6Zm5~zK={ Ţ#%J9xͬࡘ3"cV &3zLs{jqov)}Nzd\|ߖlLJ'ǧpn63,#SR߮O ?GpmSD2zd!f[2k);{Di0ζ֚HChu6HmH51*H਋PYqd`b= U]Qj /`zȋ3DcS7>6]ˢ'Ҫ.O""󥞖 뷶e{=.L՟e%EUgW3^_aqz#}7 wB[VmЄ$ZF!pպR;{X=b5 4m}azn^v_O"g[}zy7'sug.Cz#8n]/R!1O}vO'_yc#BcYC7=7?o~A̪~ACk4 .~VՉC^GZ?FXaT*Ubգ`ȑM00^ \P w·YdtV VUZB5cQϖֶռ8 ʞR*wGظB vڤUB(Q@%4QW cf-i(zx`Tkm+#T??tTdlSƎ |[]]&BQr~I_)c)]b1D "xDNu.9$BS 4*3Zv5j9XR+$GC\+WkaBRc3hpgh\8e˅42̅'o\}[S[/rV*?~puuէ/ʌ[!誽2BL(cZÑVO6kW-rlc_Gelæ<셬QhX,Fi7nRTa GUgJs]\tPV8kۺʦQu!E&@K /lLq5duќG( \)$Nuq6~1y>\8#'cDzcӭnfęwY&F"RV]5;lڳŷu+@Td63YfPʺV:0١gbbA$$F0\ٱ,yF\Qx~h9ѬX^yg^yqgL:Z4GMU2θIFCtxamh]G^yqŋSѬX>#?O@a#ft wg &tڏYCk!֟|>,#z_ d P%,/添-o}xDŎv8 b:hx!8,(Xj."vKEz71bMA(Vوk"1ph &D#ˤl**㘀kHDYހ|bJ~WC^L.DOibigz@#R7 Ƙ88uhBI*Ԑ#WOd.}P-fhuuu7j퓞%O:b'[}}hv?ͭūkeL"4CWg% $W߶Eҿm띍_Xli5tm3ŕsʵh]%crT*'DeT$5F PDVV "V 98FSݘw48/\}BS ǭNm7>܋eQsvG{y޿t{L-~< UY' =%oR6UЊܾ޲\2ĬlߋYbIb[8YY7^uпJc ȬH sp_܉&l_UyMΧCi1Ƈp{wnvt|u4Z4P;pcvu l pI8b>6>6>L]ҖQ+C{/cq\-(kkM^GM 򆍧&X2I6G1Lҹ9O܂N.ᶠ;A{]30"h `FKh5h6Zo2Jp1]؟kCx:%]t3]E]AkU\/t%hqSr+mS=UЕe^誡E3uj(-tutePY; 6̽ЕuNW f. `OJkG]5{JPntu9tEF)UolgnF8ʔ]ǝN} ^kջayjy%/g;4dv ߳ސSohѫhZtۏi4~Kf_?~{texu7>pJԛ߮{yP(wA溼7w7c k߱e0 ~^DZm9{wm~Ϋ8UQyi4^̎gzZUˬТPygUGUONTS+6TJzҲ#f@bztDW .C7qF -N>,%ҕmO;ayjp khXJ"]=_,+NW8wap=/]֞< @W~c_=8鈮Е%vЕuNWf@ǎJ;RtCW -h:]58%ҕq ]9 0S7t%p^誡vtPntu9t JێJkЕsL9UWW!tutEFmFL.D;e?,u[m<{9E ۫zPrjci>I@8)ơ4*Y*^c{:ϯdo ?j`5!cl?Н6Xw=@ "E/tL+s̋^M7֯s/F#Mg%ߋв&(Q]f=< fTCW .vCW |xTC~ +] `weez骡%ҕWJ90BMJZMunќ^=Jw~Rs<9o4hyՁ(7[@W3]AY6#t7t ]5gJ;%LWHW[1ѕs R/tz3u` +:cGt%]7tZ셮ZS+A :M7t%pIuCW UC9~R3]]]xeu/t%h JPFLWDW6 tr}r+`?^m}ܷ%|?-noo9Ӥ-՚\OΏݨv&ZvSWҩy'Մx>Z~e ] ZxtPꙮ.X3aOa@ ] Z}ꅰӕ4tutƑc׿byH i`.)!q3VI3A{[r[c$\3]5%uWw{U !Dt3t5 ] ~SiY޾2$ xY<|y(ue 'z[ v ] BWBRDWGHWNcj\{DVJ':B"rfۡ7V hո@iDWGHW qCtOϜ {D+at5Pzs#+!G2okxHU3-–T%fhz z<ޝhiZ%X-y6̭nKZ-P{nΖ<~+͋5+jhp*|˽ >}wSDZzi2P'BZէƫh-)Kt]bj'9bsDlo M3MOyc-}rGo^^~bo޾U~o_keﰜ]Xߝh{kw!R/S,ͳOϹ63tkF*r8љ&|˛_//O)r燞 7dA7cWff7_{'o?-u9F ՘ՀVjeY;] z;F *tnV hjtkHWQnj!۹a ެҝnЕ{лW;|YpxjZw W7܉;֋]T,fju~+t5龝J ':Br>+7CWn[+vt5PI]#]힀7DWl~<^?6rm Nt<{`1W73w5v(=ҕotW)>cϬw6򊭲(:yWv =q1N{mfoa!5fĀ+51՛2jQM̖. n[+C➇rm>oBW>`t~py3WC=Zfʵ=DW ].lJle+tlVOW%#]n6,ruhyt5PJCOL~oysWʁ5|qk1oۇhU8BJQ~krlX-ݹ҇To~8{™H!ۯ }v )%t_>wp[d_ܶE qv+Uu{:QͧTT O(_VgtD3p@ϋ|}u9p>+#3 q ϑϏqק5W@Ym?}Y>į`x6998PЛ@5.[͑Ln6ڢ5$&bgw}o 9DŽ UkWWM7ow? 
.'u=bL]kz䘄\f *1{}t9e1{.T j'>\]:j{S55Άζk 빓Zٷn*D6V,*ɥn Z7#́ K.UZ027Ќ}ԫM=H)JVN׵(F⹁\.߽=$T,.RjvxKd+y,fL9"{ d0v"1^3[.\L Q_׾#m[V[hM1;cl֣ ڐɏ[N #D㒳9w  G x !FV {;+ڬR ߙ>*D#n@#Mgtv"&C*!d3<%cN!gEXJ}j9 *֛;ֽϙRRm)UAr=ڜ`oK u8Ò|XMQ[lj)(%GYGw( }2FXKq>ʒ`VCI v{KFC&9:Ch*OE>Yh],T 6է` F): j{k, sS"0f+h` :`ўvPcvR׆R :%@ iR(e+XGeحB@fU5Ttgc(Q@HseԸIp5ϋ,RX6[goaH#ٹ+(ޛ0I]BFU]%ѷTMVQ[ьz˙]<"$Y*KN#b C$a]hޏ& \+ =jFФi!K|$ ՘ܡCŌ41c984p U86oʐvH'ka %}&N;OLpV.~iWgcZ,}8tŸ^tZ|/w ڦB&a-A7Ayxl9@G1+E$=$:\e ,#x.Ne1Z|Oy L}$k(@R[ q;*B@8KcEՏL?/XEމi1gk ) Fc*L`Ed}Jp $cv ԁn!z1^!BkڝC(jFI(˽egx3 2Б5+B܌9v`QmEIb&1QBUqwQ8"VqUg ¢2,lI1#l̦(Z"?@v̐.ڳ64M2YPYICx.e<#VVƥjx/GO},edԿEn$cl` H#/^q6 dG `f\ڪВ4OR}7 ,>].^_o{Ll<٣;tqԂ=*-zԡ8Cӿ3໓)YPf1cKnMōk)DzI0ɢd5AYY닄 ö=Xms8nHJxKTCrsE5* v#"C[Շ\`=-r%5ASA;81J̨sC)w0e02X*'ۉ+ED-fS[On餑cwE'y/a5SW(cD$΍>jX{Ebt T],k~x!mLv21(c@s栍+QAfJm\xETAlԞKk&b2BF2BҐU@ה=dQ痝 vSi\x*[3Wqd-pڠ@4XA VQ`ALBN3^Ln\2črv%#p]{:UXO (8q˱ %׆.Iv[1x@Ap&N1jʅUn4 X%Tlbb ) Ӕf E)ǸCHg Bfd1Z|:Tf7f}t3],9|!h>s Np63-]٨pQ3?~[uAV~*u]Y'A%xLh -wj,\.YW5L\m=\DJUdW ƃjJYv*dW'hWZ*7U}u F<=ah[o.KA cz1t($:F7ƆB^y3I/܌WѸ+(`&9N?v}r:y5g@@w*`: M2NT(@1Jk颤w -\ďRՇGژmTcՇd|C?MZ |`nty "uZ*vG/N-k _ `66H6'}3k봪)@TM\ok&@b4a.JȮ@2ʵBOoWR3KvuveBiO, cD5vrŮp}+P=Dvu:v圖 []\W2mcdWhWYȮy8 rfQP+LwRю?]{V>ci'nW{ 6v\qdp?ǮSz6w+MvЪ^q_]`!|5vrŮP}+TiȮNѮp"Y]1t(Z *wBF]]n/͊ {W]\|-vj]JiȮNЮ4֘ ;ƪ+˫IQ}2*%%'iWR @d"Pg,;=mڝNqYC p/5˶8jҾyߙ{|U0e(PPoEEfMmTb:1W9{޼/(͛ 4~T cQ_sh47-U]G?5YM> oRh2mnxRQM4\ GUSiy]pS>[S`E4JxUPp/2L* p\z+e-9UPv`f5WdW []\] 5N+ V~*aWUϹc&\j *S]Z+nW3+O7 ֮BVbW̑]]I)BWcW WV]c-dO!:E\WdW($^r%ӵB}j'!vWdW XzA몱+PYAT)8)ڕ4?O;Fdg*ϬۉQn`=0t6'=YQ4=gjnܣZiMJe(8hJ%Ȯ@ܸGFbW>A[ %ؕ0W]`UOtr+Tk{?*=՜]y4)D j ՚߸GǮ=ޞ1%=Ƚ[~j?S R̮veɮZ\iC7x]`[]\_]ZD UrIvuv%cUdW +Y]\j+Ty Tn/%:잹`%+x-vju+Ti8 ڕG g+|u'BݮPS+-׺03`b(0~&:sڊr`|f5F90Z"I.}B.mؚSx+\} 'ALZIMȳW;U[sjу|n^}'jIaQywv欆jקˌIk؎x\f;^o.6vMa._?_~ a"{j?<dy7w!5AZ(5~s<͙6u/b8r6#x|HFxU 9_{׫Uټ杗ܵW߰}2e{13%^P]h'R:6O*U_;Q^ys45yp)0T9%љ{&/ j[ZVHU v#r|`s p^Pf߅yir>Z: &gYiE4ɇl\pQJȆr[ 4[YV+#SOq'S,$ͼZř**k:YTLr]l1m`e)l—oPxY[> tnN3 ںkCܖ({\ /X2s6֋C  v~|pF "F}c;C:H$u 曹Zj XK Fq:Zdc=ceF12/^n>ý8ƝAro姶=1s_^@tTiw{WgyߟKc/& 'aLSJVSD)8ċݠ+zv T6\&Fz<,e}%bIraLJ*yUNp6-A'W> !n4\jdOe1+d1&FDŽr:^qxӁ 3mL/௳\fwN =Wwɳofsz=wu{JlSw3I|ha\Mmِ6Gp}sJ=(+CLLCqy! 
bb1*uYRL:r a.vY]hOZ8vǶh~ w6_Vu+MƹX)yL `Dȸp;cwUn~L?L+TC1+swǕ_rW~7֟;sa%b~~{b0Һ;Bp9jSPMUdU |t]wm<č5ýM@i_[۲v[w ]{HH7 #uz>\` 7BlLkx/beJX $X,61Dx-2^)zD%//zԭGaaǷT:K\11"L%3-xVK#c9q4DҞ+AoHR$}(ZGZ[MZN%wΥVhaer09ߟ҅z4;j{;͂7shoJ,ݧw9\Ä\4KhF@ )9R#1_s'Bpw ];>@ÒɬX :"6t\+4:OrN~o+tŞhڃ> ,2H+ׂe3`\)dkx|2g[L) 7)1* .Y\NL=Ε 9Y/'پ>(Yn|bjKwv{[-E~w`q.Fm 0F֋(ty!ZdhcZdr)|v|Tޜ{ނ>]Gڢ4sZpW@K)̑RmF%cYٛ?a;۶Z0zH` :Ʀhϑ%pMf$8l[2Go1D)I%W1)ŬV"f%| I!b1hU!HmLñ7% 83ӷ; :{tL$y,r,( rF@BދiѫGt7HoIXSZIYitu\:t"Vx {UFKuVޖIYnO^nY;뽶3ƘcҀjxr0 ;PQ%}:;nd{Gb{]0Hي9%{BʴF;-Dyk*Tme̲I%8iu1 K-pGZ@1,2 "M8m_#/ݵAϋx/vȾ)BgHփM3RJw|}mx.ӀSLr -%& }.X=E _^"׎l1"2P)〰 g1 [iDK3y!+XH>%.z4^J qJ7t%Ŝ(a㚎9c~]zyDdP9Obo^Ҏ; τ~و7(Α ^(#KdUa7՞c$elQZ"g &2o#[W)~50+(v%|7ߤƝQqEKϫWT&M"G!gi%ł}T1VQ&U 7&Rl с#WUD\v1mgWZ7C\lzRYB }:Z`t,( `JËXhE)[/p!z(;yIDhm 4wFm6*G@,E|'91r8 Xg^=Zj{l ^(4 zEY>f)1FrZp5r¹`>xԏM8{ɱ6$Qs-9˝R 3/bAYڏg~6)3%Ey *8C!f0'68{ojRz\Tkne7_珳*\,/'!0 nC"AG0ȦP/폟APda 9߽׽|59e5s׆ؐـ>?pmY9_FW7n-z HEZ#~KsKUW *>XIcs34'iT>:{;9\?ʼij7E b21PAv1SDtbv9Sz^b9ڔ]"M #"XfW9sն^nk>ls_3m`4(B:R Q Rajg"H#y ;i-_N V0ܟ`@VsԘY-"A o-XK)4EaV:zmu"%|uJ)miHޝl [Z!;f*4:?G+x:%$ PR l-9 Sic\@PZӐ'=]ѵ Qh½"mqa@B :Zk[o20}ТgM#Fiͮg'* &xb& i>q1Ao4Jx !o᝱\ˈah@HɴuiL4V]Ҷuѳ*rHN}C_ptNQMZ1ޗ~$0˦Lp )\oZxL({OK_;W*ͬBU-y[rr *VjmϧEO2^JA-Zgb mvx%a\ ~JIS:kSy 'KrAQzc'l&~~џKR=?|O7w`ϖ Β'Uu'vܹ뇼:.+EBy[7}[ESRץi JJx7N@u{p<^NOp9< r07-6lrXFt09u#x<I rb(K$|1 Tώ Dfdן?N8t~ϧ&l7A-f  jְAcBI kYrQ)EMч_vϷi=| \QɥϳFW~\&s3o`roUJcR,&!=U ?ɫސ:~Pc39u8 ;=:>J"1'grrW?דTg%_iJ01';&ROx7PTKo &״Zgw#/AU<y@Rm:o`ɔj,6Zfк `ʨ@ ūW\/jeoYh=ا7P~dUx ϊ)ge“Wfna*͛p6߉` ~;> d6w#{^By@Q.,heQ,Rx/%>+vrx=OE`xvW PCny[NbZ~{@E= 0w3\-~^cw`EKQQ ߙƃI)|;AvuVZf;\AePr܅u ՘g)qV<~W/G}39gr̾6U:N3mR! <ݻѻٞݷoz7# {vc﹣*paR֠E&Z+ʌ;E6|S-R[cnaguzԡvNbJgXH*) Vn TB #C lڇZn`ȟQOfzK{@Gm %t\YdOP\SX̵Q*g0er+X0 VqSw@oѝ;yJH4HAI*<`5JJBZAur<09[XE/ 'B|JU9*޳V@q ^y8Ϛnwm6nn-:zt@@c3tdspTF(/c\T4VcHmp0@%-:2%=n &yktH!ݛ IUp;ox6"1qV{tHbaJtowLV@z(V'}BKykLJ$mm÷PZ6v6Ca#'|jùd Vr3!gҒpsMPJQh1܎F=A9܃_{A04L/bl1V(Rff¸谴RÐNAa2(:o6E^30{-#DMFSX=CEt=eyx=V9 [1o5QHFE*5msjj޾hZi{)#B&X>EōqCAO1M*$%ߪ=(Ȝ 'itwՙG3 *+/+.B,\|(q*. f.A޼ѾWPީpV5J>,VwϞ-d<Džz.+҇餶8Z˝[Wjdt`OQJ~KX쀞{s}RPş7wrݸ@TPſmXxBB,[ !ؽ\"zÝ:0G{H]vۺ]E*H+\vu*I apE`BgF?l#<+%;j ]] ݦ䧽f^=H2ӌhΠ1So"Qp͒'A%s&;Ae*Oh@!l8,)c$07?0v^kbB 욂󸢦wr]iv;Ԇ,{I׳56l.l{[{ʚdih6V&%gTZIi|f!Q >:N!^@9DdN(3*s{є1l)@ RHֳR49 ST:JpHifl ؜6&rl Eυ~:DxyX.ǣ/ 3vp4LΩANfōNd+UdT"镉Ш667,s= i+JDl4Քf*5LÌO'b}Y[5ڪgނ}NQ;f'ې5OY0p3Qũ1v$,vIx `$4pML$( G(|2NuҷO}26q>,v~$رMǾfD3bψ'Y(A*3Q=Z,ƭxC)&Y61TSTRT҄4ϚN| kpyw'TZׯ~&}6#̆oTt条+k;sxU%=]il9(᭫ nLWЪn(u@W}j8]!`-:CWt(u-R)!+th5m;]!ʶ+ѤCt-up ]!Zz Q޺:F2:DWXWuƺB¶Jr-j`mE=͕>KQ. 
/k 7EU G &D%>+?yb0.zs |FʥTCNJ+޿JjyR\e;I)탹D?^3$|TX8$mpNSY(jkO'i5{2`LUz~Ms[L_P[|6J0}'߱;J2/8b.-wT; 99;J)MR!EZJR)E'WrTJKCZַͣWdyf|iHQ` =rոj-5}᛿R"N5фKܦU\$hxyjvCZx:àn7햲5kKU;y*Eێ|I~RkJQFL v'bhm{ QG0bv4DU+thj;]JJc+CCt | eYhe Qҕe.98t(k/{8]13rJDv|;=T;=jQveed{tS+,]+Dy QcF.V\u0]+DX Ql`OW_8'Vݱ]+D{QZҕ2!B;+;B #+eBXլ+th;JrE(X΍ -3dWR͌FMwiRD- jItzwMɲúO$ԋ7A\ط#'ߺC~j=I4f 28OuWi!5תv@ŷ!KuL-ˋ?U'3d>UѼ4D^\W=2Ow8yX^IR1_?ns]|]8}~,xTUd71C8r&P"en^bTs9^6?gS+X^\/Vn~%odC.~{z3L{;_SCa?$q-h՝ʺR]j䶮)Ftxv1*qB[%*|8YJ@ocɂ3ʬ~s8j]o3Z_s XnΦᢸ 5?wLars.E:kk* (#cBfkô҄gvLrRN\M< ~vbyi^.ؽ"4!o_|r2^<*Teú} \SyUn[5座L2/v($ª7EŨR`jqant>`*_o_:fac{.C-;&SJׯJջmt3˺/APz`>Ю-Ec+㏷'oo00wrӉW~`g)W4~P6tJ'pO/eUp*:_,.SҡՂKN/P2NJy6h(/>IbKjQ<-[+A ˖^:w`>2Ou1o(ܥCAz) ߍ`āZ5MZN4eJ֮CACj>Ѓ&dxdw,ޱR?@qђ3EyrRRݯJ3BuAZLzeA=&"eN 1j2LU ቚHT ƹLF~VWYm]·|d[oʗpcUr/z)K{7hbq7囿hL5÷QKhR9fݽf P"<.Ԃb$ lRBq,qr̍mt@4̫`]Txm4DJrL sS5YIt)2 ]cp*.Q\#L]-HF-Mc9gGn5V8=)l~M?f'e+s!]QzgN׹5Di69)BrRڱrxBV Q>piԚ(c$raqs\#Vq'9ZEú-=&H4h_:5aNx#)n'QJStR/>)`lk nPc'ITHIm|&@Nyô˜ x0s2| V>@B^Q`k&KX<˜+mL2F*H``P]D(F Xt O_J;"+HoA}8@@cLmOqS]<9M"~q=έCb϶aI~jGcR ՅIDsJ>K,͙`SRy "djp7~̲ia4q+㧼W^\Ϯ6-͝u9O~az9%4Odr_ +mH [`G"M$/1>%Fp(;vU )J")J<ލN骮W1zW7ׯM~7g_]/ߣfzt/ZAZs^q޼l;Ii$-JJp:',,YX b%,xE, gn^¯N  bq 2Ѓ\^Y)\qOwٙ5EyA$8P IVRsC Y6.i‹Ot*DO界܇ JJ@ _DBĜM<;X3xCm@ 7ѸE{F4fx9X A(f;勈5KADM\F1#Hqg)zNBʀN]L{3Knڹǥ֌p艑Eq A(ENa\Դ`Lᴦ(ku ^:-!46$N&At:! eim:%iҿ\bA^  D,MsTRx'4UV3&#}ap p<5`K)b~ZYQfSG}4= qbcX3a9-9\Tƒ$B3܎{yym_4)%x4SUR o(F%# br:gO8.a= *=D*фKd/ Baވ`PƣL"heAtivpHj2(EƂd 3I -1 &&LDHLGH.y-ܸ&3|Q$}H [ . jz}^ Ι-Ln̦gYY-8{yΒG6֗=^UyєtQ',gq{ީ|gqre#HԕўBmj8F9,FE drŽDI1lYs7tTΑő)gxE x7ZOHt6#P(OW2 p߆~PE'|xO|'(EXYVLjQWߚ*.ϧǫTl&ߩ fYS>lp)F1P{WԴͳi LT?O|u1?m>x*X?4{ޮGzyLrsGq 4Rڳv$G:4 8b0 mIşp.[|2}Zlty9|8߲86Q4ꆵ*Ql܏  <>|IeqLhoR[rSEՖN%wugqS<@v47jo|ez׃77?pgMS`{݇8bhC 檛 ˸%o5^nuʭ.ju"xdG孉0 \]C/,d~^G ˞ofk˳CB4{咏l*KB!QtZq~1=ġgP#`W@\LU ymL>1k^Ȏc%%/ >6 tV{]}޸"!a"F֥'XipG Vi~]㦄8T4yL*^'@5A)156|SBsmaz6B };X6pZ7ϥP™<;?^VA5=Hj t*sZyS"u!se|v SBeGUcRj/W~Wb^Es%Mh%/:dy6WuQNzK>q)plBPgQ+kPz"+P%Z3+a`E"wϷp weo-#|ˮ\qoJ;D*FrTH|aԂdWe]ދBK(ϔQuӞРѠРkj(+s 9A2 J$!MNK-ͥ=:W>\ե 1Ѻz9PYMT:oB *RAT<$&PFtLڨaxAFB<` )s~#1t qF(*( (CJ::UH:,I%)QRA;j;17|4?(ӻa`w{xM 0#|Z4<7w#3T~,A%Ţ& yPRDnćc<-1l~:ֳtʼn<IXM$-DH[JSK($A*>&oYc?{­qĜSvD%!".2ha"%&&Xj@:먾=?H!XhGCnԽ5}0=&ْ4>(ʉ?sLR`P;;A-NYP'#QV)0Sgcg;g9?[^B_Û&e$'?Iwcilu nza\.4 UN9LHfTmZ nݲؿ0d/Fw4Q9vVYԀĶt cZTJԲ`7P9J "{縤 @@Ц$^ěKA~6~|mG6c~|';>?^7Gpoj[}0CW"!S(.z'Ǎwߢ ޾AH[RוT*J|t8ȉiJs#KA~_ ONd6@r:\x2Ε~^]Jl8:ŵLgOFy7quql) 0]镉RrĨ:9x*yu(Տ5ޤm}3c Gc8_?oHbz^Qb +rB+c#Kp &C*e~1R"cVm Lki!J:b&Yfka@7hdqq,p\? 
!> {7/<^\"8ͭ3K;uO+?p\r@AHF)cs(+on8_eK$x9z.V=ʱ\v,RDŚS8$3|׳K,Ȓ,b;=ݿBX 8$NI>95߀l')sq j j*F#4NScp@PHY"!(O'cg7r,q/dRqy_uOBNRx )}^%ۀDDEM:hf(1Y2R0Y(rk*QQ$߂h>BDth)EYP{#gxTK$ertmG:|k<9";2 zRu3mY'*Q,J6FD 1* M@ h_|̡.XNޣ\}C`Vh+2JR9M@ԥ>;PcƠSbTrKFY*aao)Xߋ15Tg <^0 wZP@ 5C / &D%L{#b~`s@<Qݬ&5Fj<6*8}ǦzFDu@"T#!OGTdb 5^z=M[Q8M 18~j4%D9 $( X8+k2J"GKJ٭EjDrC\K6E3..p$BCc) ʴ(".**54!Pg"P۝#cg_\<;6C3 lE]#Wz|uEF!ZW:Mgu'l9`>[ˈzQR۶ێJ~E4P.*$HYyR}*I_&?Y y]$⵻$zaƌ^Ų|kW]q-ICQu1'͕+mh_T{q3OF|Opoȼ.X$⥢Hm@n,.ɂԋhT^{nJxg}ʊJc.(]Զtr4);mo~/Q&mмL"24,2Q"1RjіIg-Z(&7nLyu7i␙޼rgr|{*0tk5A%OikNԛ햛wnc9)l,A9VZ!CiZ( -Ov=K듧9-XFNj }`x?|ZߟvV.5gi}o$IKƠ"rdt]Rrney L6GYEx>66kŦlo=vj6ȭ wh,`GwE佤՝d3uͦ |Al.{^Sռ;h.?䫛b>ya2O ߯k_+\K z4 lzW׶G봰Pϟ XR>8 B{ŋ n&7+ľDfj5(LR !ˉIHp3"2L \ejURK+T8%p n4\6'6Yi Q+l_O/LWϧ_5jҒLGZIdSL} C5{ә\ әցLaZ⻿J Iڋ7mvCx> GP'Hb<]2h||z._ߚX>ˣ4 L98괤^|/bqpq?5:KBK;M^P/߾FLT Ҋ(Ke^'Kht]L:7,>BH࡜M.i+6E0=6lk[0 sebI(Pzra`V <nY *nԲ:=BG ,Ut_lLyTTl/vQRSOxH0bo*RpQT.:՗WZ1J/,U&W}L]+Qz/F\6Qo/G}͟N{Xz%yqV9ϵt,3T#%N5Z΅ᔉi)%P7$Iچ!D6t>ζM*%.qL {pWiOn'oD#&ų=|.Z+4fJugJ-W>k/YwdKuz*QkD;Mguz73ﭣ}FǴBEHuВ4Zj í_u47m9X3%#j''J3U,$m@x~- B\d P J *C!㾏Xz6;%RG#k0@*XݥkbXP9?fk> IM-RsaNk?=^|kٵtÕP9nN8sR|cѤhWb vp-+'ɺ!𮮏SQjLs:4{j4r aPN9z$Gv:nxy4!š)kqo7ZOHt6#P(&W2K\?kҀ2QT[wc"C.c,.l"k{\xv~qf}IW Nx6!ߞ<䷚$:}Cn<9RjzW\SswQߞES}Mu|tY1[4P-7Wk)FYUo8z ts$wt5 P92E q4)j*=GvWc{&gX?j;ɮQ;檈J/F<@xREΧ"1=Ÿ.o=TQuC%/rpt7_߾:?9{3} w? ̢ϥH`ۓVpCk"k͂3s~{ǸWK]r+]Wz0cmZ35TvMYp= Ehd/RU}󈊪ݠ2xϟ B4{ C*ߚq{ăl$vF"9 @͚D%PE"n@DTj͔2yi}B^YIϿwpnxoF%M΅$ .Q&YL2)%V}p˙4a7ذ8v'nlۜ| @ߍIZ|wU|Tc%R)^vMBOP1@C=-6(D#I+e~T{H 6 X!SbD<߯zfH0Քh{Gg͚rT@ dZ `٭!&Nǧ<{vCmqPgV%|z"].JUdm2xϢRAÓYASƆq2MHQiH-Q2ƽ [d旰vOSMv0;0(Tu!Q[c2%1g)K#>w rJp88*08JBT4xqqamEJtЈ Y+>X8u(hg Q$B%Oq (.!*!DIBQȀ9=0dg bN?BD!ʹdHfEy:Ƨ 0,&ΞSg}JoNG ;d:mxVlcFhk*{3lڬ$}Nn/iNl "&/A0cpm΅!)ɝII#G 1+*("#4R9XV[\.iF,U9KnqQ%Ol\L+QuJ[x69,}1'|2z,fB%P8{T˧mO"bZrYn0݌M޻{YovZt8>ڜT9dWqJX!qkM'69'48]`wItOZ#U!2l#dk}|:e\a4dj@w3.7vyCv{9}~ӫo/n0Xi;>za>`|F77=wxcp7,K?ҵZ!g͋a_FL[ҎR*9TQ[8ߑ]a*>KԸ(*^T,X"FytNG2ÉL k~>sHsG # J$!)NK-GXNNY;8QD)}1"3Cՠ>x|n hh'qg4t?0,JQP֚pBQ0il ͂u|GAՀ:V}UNf׋uZlX!]B})* naO_*yv7_ FR&l0r7A[' s,(Ǔ(3[AU&Ju 9{|Sאĝ&e$7Ig}il zi \a9R.57^8x2将͊+Oi>n CLfb,{[?VmϧeO*A-{}~gz mwz0%m8)]# p乜/kSO'KvA~s~.yڜ&]Nߚz>5ׯ5n7Ӌ(jzc|__jcpLXYg?낢~5E/i}+dpqs$(Cƣԛ] &u'M,x>an'xyHi(׷`&(%UE"kfؼ]fצt71r4.~w5 B͛; @lEr8 VZS!zyP_po\z_g wbí_~~mn\,WEK4f޷7Eo- f~Bo,ӼoTr5N 'X; еCoMoɷ(?R:! mf;oq> ]EjT7=_2203GUOWxHԪ+wF}9ޓ.&h,P"!Wn0Zd90@m{rGg&x35D.=|BoNYS̊1 QYwjo϶o]T'Ut8j@T+m* ON9e ߞ枢Tפy`Oꢩ_(ZPth/C}Sy[ Dt^TI]i? 
.XRq*O")ePs )16FCfeb%hHR,g;$02^ *TG#u` `A R.@%ATgdAVg݈z{N 6s˳Xu4YVILW_J~?1x5QT%~/LQyx+AWOD䣈uD8DN$Q, .Zꥠ!`"(V\#QR2xa`}bkmu0d<=i)+7:wŕ=)YN%uN+T W9PGu 2>l@( $eXHWúޅ"KVQU-BV `Y_Jh^%e|%ż DTu@D墶]S\:/\G;JHDYkHdum)H9g-Z(#䦬vv/, ɻw<ϬmtEe\(;Kpw]\kEvn;z!v0bɎH<;)bя!:ƻճ U UͫxvM¡Q' ,WTMhto8_xQI}?3f z߽טDn@^Ba{Ǹ=Rt|l< ]ʛKU3J/TJp)pZExb*'y`NE)u_Fi-V>ro6JP z-XC#w&@S Rpc)tޞXm m7Ipؘ~oיl3߯JvV=vN3}!H8ջ毊٫": c2Tڨ"q#F`QV(;Pg^%"Df_)* s$LC4L)d~T; (ɥt`;S*Fi&r3/:C"e`T(MnpG>(&NĔ"͂L)yK(|SGDͧ%J6J$%B F~-ChSIM_#Ay} nyR}ك-o= T8tHxKx8O?|,[vgw&~ ikpG&/YTMǦ䆳%d8;rhM,]5Yٝ&\E1L-UӺ`nҭmYvjۆAy0;^;Ȼ'l;qѭxȳu掩xErXxmܯB|hKy>-:d;ܪkȖ6i[quXGO%rA4*, p_Μ\H'9Nr89: ap8%9Fx&Hx2>Tk:ܧණ ȿ>h-5s+ɓc`L1* :4 hC"&)w?e1Q+ E4e -@)E/_J 5^Lknqzt 93͓=gfqa9٧ ?;] 7"JQahjB|̅}\x%χ`Т(2JR9}KJgcƠSbTrKJ#c1qv#c9R ͌CPBu}Mfr474@>|y7 *;Ö-2Q3sj\Ir턅d R"N@5 *mRQ'aʞ pg6T^31YSb9푑DŽ #v1qv#Šy*]lvڲ0j;H-3g37>)&Ҿ`"Q!|l#GYzԐcyT5!/YdK$QxD ׊~2g7J"PƓᱥǡ #C)ӅR:L&SPKolEV%N*C $*ltM *繌85I*ʃ2θkmH /H]/sIX,OgdHʱ}U,R8Fe3dWTU`RIqIYIb5PRI.9_׈ueI/NKmu6%EӰ^4^-"-^$0ސ`@6Fd%w^3QTC"7\/E6C۰>=û s/~9kw m =A;Ѳ9blsNPGf_aA*e,Drtµ-U9 f&˨J3 =B ,萳k O+hyQ䁖JhKQpclq?^?^çۢpEfzzZ(i0^æ|2qOPmFŹ pƒk׉;@4(N1N$2fRkRB!xczg20-Me]&I1\R4B2Jt6k&z'Y<22餲c^zkng4  vcd*$6>p]\'/4+1b>EFe+Rt6H_vQ\ՌQM> BIƒ5;/2$3)ȥ iG"&R"h-OldɝH)B֞u3{7ԹDBloc߽d1(TKc O , |ERUd|4s92tMUHO}iu ֱ/(2orܺb+Dw7iiIiIb/tX c w/oEAG@,Oԍqu'ƸqfVA/d &l钑 zb*HBȲq X"Rf*cV2Ȣ^hl7FW_jy`ǂtī qh"z_]8|@ o928i@}n^P%F_F2`D[ʷ쩼v-`-ͣ\ P[2h_!:(1XV, e֨P`#(4tuKC S+zR.Otd>t Ӂzd >:y iPk{D`#4."KcB7.Tj5~QF}ՠ_l%daUC\Md]ɱud zqO1- c|1Ej]H+|،}U?u9\̊@!*` 2c*4}L.qU*P$g q 8}pʦ~Xk[Fو缴AԜnYtz7[ >xO{:'hR^`ReD*zMzMrrڿwg / Ih;a<.@&ʡ^=.`R5Gq!jgm*Wl<.hh~DF-F]r=uEJzqr匠SWG]IdnzA۶Ŝ;ZA5͐P 1y&A~ztvk$柑ϰ3(}Nd kͧwݨi(ٽ7qBp&Ɏ' 3h]у7f'n?_NF u2\90Nz],^BQ1tzш^} 4|"LR?r`SW#\?Sj!-= /z+J/_hmcTEi%Uiq)(j9@bi^*;B*s45hLBPag6xkA*Ԛ{ 3ٞ %H]hU!WcQWDPbv+c.er**vuU"!u[>z8a?\]=`C'x<\)FQWk+ΏXE1/)%] ~Qr1.^΢D6Y~T7s9u`&?xy[_1|!ʠJdfuĤO)z*NhG9jQFi~G?~w6Mi,'zzw 9D^l3mbfZ4Q3ߤ~ۻ5ݽ%Ö^.ήT_zz1*0[C6pWc/.fk fmW#pN>E\4xmIƑPj0b0vԲvUfyCB`X'Ze.?-zr3f?grVlUG]LrӨ檙Q,G2$p\m\ORo{EmΒ3ם/\xٗ˽];W?~~_9œWo~v LS(e6 O?EGpg_([ Z&g=_mMNyø?fkc[n-@o{^~Of -_ F+R][5"U #D62-̿_P77bBx[ &K=uY Y]1ma_kF^m$X3e>̢ Y%A4.@JD)'0Ne lv6,T^UK6:ZnW;Tqų1ˤ@N .PpDo9뤌'2ݙN#g:4=}7z1g`[{ʃ3t97ǜ|o&l}Vuջ޵}gq?a;ytӅxg`S `UPU]1#m ّg1L&A%i^hgdh* r b6JoR Yj.= "UɓNNѦRs#KDB JQN(9YPc<0ɜt&F 2Inyt!҅2.HPeFct$ɀapG0NX!,4wQgvOfsjf!`)%AdqVAf5J L=mq5 ms}#stC9ϽPLܟM;έnarӄD8aTTgBӿkkd[Xg;|2У==t7.jdP:FH`$hos9haExNYYdͦrTu,XkFl! te\bDv]h}c|#8+yv P9IN똷,Tγj߽Mϲپ|*! 
j>N5+YdT{Ԏ|YȢFtӫoUl>ás /_.g|݄!G*]m30}q:-Ƌ}9=cܔw>Di:nvoaŷ4W?T ӯ~!gG!\n|bݿw+laXq\wv~k'̟={a}k+ӳT$j/$Lvsl 4ccwI$Jdy4΍<Ȝl ҥFng5gu`fJ!(("ږ!gQ5/S2Ys 7W\%O@C]MN_09n~KzZ6s{q}V7M?X?qiL6Nu톫56 >߾طE_`0$~4S[)<~p m x3=ϕ~W#rv0"0xk[8п^_͟vH_v7ƿ^=οǿrlz':D봉[SwRt~d) e؜ ޝ8-.ytuy}]CuK?juiJh0(]}_ ?fě/׫~sx,?]`.{+nwXnW[bMY lNzsٍ ~޼~~n{ N׽~ۭbw,d!wtd CYkVk\ghM\7o&6ދ 7%.7F<&MٛBq 2{P[˸C=!=jP[jzKMo-[ S^!#QkU Qiǫ\eedluvڪWF.EF{*#Wy!Yfh)wCDD${8oP,3‰"ȡ."j>Y%42j F}vZZ!V%*U 힄&q?.Um&@F}vZYSis͋f֔`mvcKg$"h`yGmI6/I\U}C.>Li%/6Pk%4F"h~舘J+mOt] 5LBm$WEf2v1Qis͋ftLBm(eWG LV\iYfr8m֎Ai]HɨM23Q;T\is͕6w6퓴n3fj"hNh$2A)\Ui"#F3Nb! ^8VŀatL`Gr݆wnո{g4k# Q Ii+% 9ȼi2ݻVE0Cjid̐D{ j-y|A|Ve=jU>[l.~.ٛGV^9stYtѡpD6qGp4~XMۑncE'wEbj[.w=4%xlB` p/ltWK;zlYڬzzĞZ*@e$%dN2=NvUe$BSXƧUf`~$!?~yclR6pҸ #%` UMXT21X0Q5p!2S j Z2SѵsGz4ˈLfףVۉHFz5Ez\9v.rzLCmj;U~}ԂzP=U^TC檂fRϒ&O%L(ϒf4!"4wj4": <: $N0P,,CfHסRԒ0TZWդ4$S={I$j=3LGm5.+2ה=3ZS~*2=AZ ~ZS2WU_j+BP"v"%"jiGw3Ǎ4nlϙ; jF3w4ԜCp[c{w>_b 7e@K/Z*P1J6dFw rdy[,mַl2h/kuqc7mt*=Y ç;yY{,c1M2tE$B Y"{0y>Z8_(eM!EGVƦyU$>D@d@,V2h\덅F؉6Aۍ5`8~\cYɤVFj]NEDl: e$Q6Y@T*UJB,b p&a0ڤ/XW'WG4ԛ7'+4ͫFB* [ >7S,ۣvTZrL55f8Gylj]o~Ibffw2 5 oٔvLf՛-OO㠈XjeIyXJɉ]>g03MIN)^zu0ܥU7e;m! 4Q%B ;j$[l0N&W}6djj9Yc48Sw]<E ;d}ڎ0(pB}}Iz c'O/ 7,l]&2W\Zz tֻ Zp&H= ]A[D bd µR dF* |tYiy j3 L4sLHTIG\Z/#s^k(Y|`B^j kuZ QvN]"M)QZmd5QZmd5QZm6 NCmH3GHj9uLA-.|HB<ՉMAH~5+t]]ʷ#"L#.ޢs0)#]CeVk&BRc–<)rP8m 6BAH)4PPcm 4ԣ̤|sIzf0+iJ>W@qRW"LȼPA . c"U=jXm?Y,ϼA5{>V\%O@C]M-NQ\lW~wݜ5noۋ˴zG:B#Ν?}[0^md}}?ޞ|_h KL;hN +OyH-6~_vvm*5oB:}+Ls%GG>2ˆ 8խIzo䏫q紻7Atzᑫdvgכ85B "JuD۩Ի@pzG :}?Y2醲lοv{Nnz{V6)q5\&8mFo0˥޷?}\]߹:c/*9lmt{aQ,ƈq" DPLm9xSSTfZnJWnM() l+_hbr>K hemӇ2oNBT`7+62qzI(ءظS6TTt/hk]?6}0뾘f/D+fjy~w8^x9F>Vf^vb.eu]ly]0F~:Cv玼yCe4ʫ65YbݎV nJbG[Z~֪d9+j9^ ؟bu\gN^@^S)D]E;OLB-\HLDP8ԤsUT6vŲK]$ ;\|ds2OwL0x>4˪ T-LʺZ29288\N9]Ï RX vSp,{r;پþ:S1aZ"NZz'1x"X`wc%%ua7vύTiG]neC.{_rzܰRͬ;+8 .2{ ̉1駰E3i mϐ&i"[ب:ic5j 2j?`ʽ2SF2Ԝ'l1׎!FU-' urt[|%߹EE;沑˭A./x卹W%P3[~ M۷r~']*UK>$ =uHoهjd_PqK+-a(bHiÜa]Ղsa5a#L>; :bb8Ժ눉ނ5Xk]j"5&iL!C֨Rj"WM/pum20ʂyj ["PhiD.ȉGp5a$sNH9q:X]2x$ރC 9?RN\ހ'7IAqHԊc@#D 0bQcZ `ĢVH&7;nPvr`F`Ƴ``Ա,I0PQI00; Q9q4p`ġB'F) R g ŏ:q{ ׶Ur,JZVršIfJI6)KS/`1ހm {qvj"0ǜG>AI?51GRe!(Lֺ,bjEdH\$ZC J"$BP_좖DڨZ̡ZJBJ]IJ EDDh!l vE=;"'ǜJD&_Qأq1$Ռ%B͈I <R騣hHfX|w'5W=n U/)&zAD4Ct8Ժ7r "S; "V3t?Gm)N1b?Kn3#R_M&b#"I(3)2siR5Ow+ BUŜD[ ceՓJ$QZqҰpՠ|n=sK)*Ȣ=플}_t{ 1R WN + M +uYdš$^H*^mp.Y +d}%YT*u1F~Rd5eMT`&/JD+P ga%LjZigHE aA$IM?{PMޱDs,GɄSBpT!nR:Xb^Qw(EiQN6BO6B-XǺ۱j)(Pˮ]C{"Уg2{٣=e+Rd?)IIpVY.=d |Nw8p |j56p !%a>>pվrrG $#KL]YqS-ӂ8Joa-q (S*Z0z 笼CwM?!0j+&]vhc`_{$Y4 GB5iۻ/j9ߛؽP4:C-1ɧ`>w &`"H':( uqa8$ |"Qwm'I!HZ %>'2_3L FFHWI87?).8Z%DIJHV*kj^ YZ++vzϯ!T2)q;7PmƠVQ:H:),gZ\-le +x>$*MPKӨ43֨Εx|ݻqYXg%XX9gB$hcmR# TtP4Lm9x),*3Uh0?{\Y'RdZ$L" fT"\dr"yMf`=5h0?5$`~,juРh"+wQ5C-sF`~?`><Y̧2`~$jL .j-D` u*pOmE-N xq.j8Z=봀qI/PK^Df"E$jɓ^`vV4jx ,z<z(ԚDcA-c5/ ^@ӎ/"Q'Qyqj-r"/ š F^ġ\%C-1J"x$N"xAS^ jEI xAqB1TH"xZ2DcAȑFxI8$P_VH/r8K"xZ D"zE$jFHs`^]{lg`aкJdk')}XVʝIpqV1] (f>ybNW׫-d #do΀}RwńQ*4`cPzT]8mrn"q97~C/'-P_vG]³$􇗯~q,w+7]~EFaK30Ӌ)1-_)K%˳dM7؏%![3_l=\cJtVxk֐^"K`n"}z⯿^ۅK8ãKn'&E&宮Fv2?Ŧ%8#h!WBE5-b`3̒zrIR!_К$u4y i8-#bt=y06=ςY1x\Z*i>^@| D'򑸑^巜%*V}obUG8f ՉP0Ϛ) S!BH=8"-CDȘ"EZݾC~v]ڹX7`-Y\5VghD_xq97?S^7r[/o\m|2U1CvLKũT>kð&!!1kV>6Xl=1_0nBE/:+| l(b~ 6 Y6,,]KL頁'k[U )]ɑ eLj{751%3Jc#T?tE{, G "mrA}gwj,Z'D½ ^T$'vySy#9km>uVGLZWŬ?7r^g\Iuq%IԑlZ\GȏWG+՗3JE5%)#usJ0{VDaJ]aP )xC8}Cob{L 5) NR[JN ĥ5$_٫ oI\Z ЏP#)٣n7FOGjII=Ԋ垎qbM!eitġ]T ZHDOG$jC8658t?{W۸lVrW?m`7A/Yfƈ ﷚=,M5ER63%TXUgz:¡+?#"dV?[@=,]e^XDx) 7ZSB(dvǛV\ضRR3 0 Iḅw/ 34ZoZ_b#󟘺2F.{YX=/4UTzٺO(Wi o_ퟚX4뻸>YD>ﯾP Lq|仰 62NO?p[>}/'g3egKw7[D@/a>ѓUyq84Wہ~WG+I'V^S a < 4>UX4n-aeIBZ%c h%y ԗrh0t @#ߛxl ߃5DG H{|\hlFU?-fHd[/f?^?zuS|NJO<ݕ~OfhfsՅg٩]xMϨtQ0#{6N]n~w-n i&歾DkZVuOQ<oMr/'h  'K4y#}7 
d;w@;h@9h!%lϲlH Ykc״Xl߮R2pJc["5\k`{KFH.^H6 W1H|׬ *?YmTZw FM]~ vn=TЯQ/+oA=jmx~r4CԩXzG>PWB=Y-k6^zV  UoA;nY6ލ[.){6m+Ԩ޼[~XwB>}һu-1bO\ _=<#5vB>nؔJ)J+"[qYg~/Fy}FkEQwv?*NW>ޛ׉ױAXK؎VR+opc;^Ͷil4c;Ib<#RjEM/FQӨFՉgJT@FZ}Sp h+޷Qm6 FZþ;\^b]jE4*`jםLt}1oTP Y+~JZS5A{⸤qSvRJqRMmP+l( 1v%p9+~X8bVRKUZI5B&FQj|6DST1ESTD> aluG; oy? C`F4O~!ϊ!s^/e0JoFtH5S)̵ &Gd4/i^FӼlfx%QN (M F FQ Z 忠 bW[\x*+sXso5ktw[zIGJ)qh"tRIhl4Ė'%13k[$HHV"}`兦ŮRm!`\%OFFsӌX"l@1h8- \Ym a,! hYOeAu!,S90Jo hmVFЀ/nmO hSڡ BP!m'=.p  b!0T$J.6TR_JWx Yn%:>"mQhVqnTb 03rbCeUVRru oU {ٺOƤu-~Wjvc?h >+8}%Zx`.}>.'ߡQrYNO?pѧuT׳Mp\||ޯR )3 =yySz9Y (_ԺA65 O‡nf *I[9tKlثCPUqcקJǫ.^.U:uOKAT1uh좸j[@),oгv]6h׻nԂFx o/SJA@ P%&ȩӸᅄk̅0Gh.{ȹuIrB EW+nSֆ}ZҔ\1'*kA!LYP6F t^Ӽ䑁.dz'82(SKXy%|Ǫh\WObADǾhuc]0^PsZ`EZxu)k:d.9ևj]&bS+`oI^d=̭hNG&-tQIп8~)tvڡ #@ehol0V{w<~p~ {fYOvz{zNŁ<ϔZ8 Xa>u:06"gval'ﻎ1偉z;y叝$4OO)_Kqt591f98JyUhxvKnӾ06W\gfutγݕ{j'|c R4Z Z~ C||`_I`A8o oO2/l;AJł!\ kJ+t|du! cKҁkOeYp`A}?h'ϗZ8N(JX\I0A[n[ύpYvIsz,. R"ŘK:Q"S'=#SM;HWpN?.ý.jȓ棠Io'r4魤d1`Ҥ(&5/^H-`Ih'&}_קTuBf;ϒQ!xr 8iɪuJ'Jڒ`PZ8Ϥ:HOKN F6Wq~f1'JW e 2ac (38C+pZ1R[sTqAģ`X# 9uJ^/`<(8oQE WZ WD 4=G ']w\yԺB6}TԊ!JK!)G[ƒY)TuO /(Τm#XfqOn[e\3PCy'QERb=PgGе3\<<A਀k<{ sHtBqh5ZPqWG@|М;E#,fPShlGpvg <*%RFo"7NP BT(|UXNmfr85Gu(8`L+5x-b)[7AX Å1Ds \!@PE3yarh\pr9$N3"=8DE;ўO @֠gy wS  :Z't2 TXYNn"C ,e㥰^+ &**7xDCVk1xx gc ֖. F VGS ^3oƊHJ1E7^YGD|=L 23X퇡5m}//Lr<^q N(T ]9=gDz3ı{&1UKP]Yo#Ir+^<3##k0]=~6yn (D́EwGj:Z]YY-VijZE~Ed<RtTIKJhb.20ڍ6eUImWM/AT9W1Ւ/)Q\(? IsPg*8;O$1ڈc"3taD#7NGVZfJG޻(tdCc庖C&(t Ik"`̘l1 Ȣ'bNRͽPzhȩ j̩d c$d9"=Xl DkM3'&{o:%X|Q` ^rK@4gt$oCxd`'y޼&s?(hݮmRK1;X6`_ԭnAdOycuG}5p8k>D5ƵE|n?v270* &  #<*gw)ZRG<T.,w,H vrEkE cbFg+RVFXr4ǣ5GqW ;\ j'f;l8^w2t4feBԠ8 qj7E~hCl:ₚDcfC<bd݌Td5jPa de+1IJPIE>䄉;-uhAY JC'A`m>rpUB6,3pw994 5X1ryK$Kg1Щb1$[ZE-hC-)W"o3|$U:pY>b\Ta~kjB \a[TG%wP./?'L e29%XҷzFIîuWFwW\ݗ ӟwgu|Lvjf_v3-gOPg=ӸJD'۵$$ goKH,I)Wy @is+gdiy.P4Y0 3% aQ9•A:&INEiMpM3kyǶI 9QLm @=1@+ c]Y"ͦdoj\D&ǑvSiZ`RX"By-n-Q?B_t7¸|)tHd5Q8cdRx8̅?kD 'ڕnh_NjQxUfjgtjlgt2fj%,kYĖq%*>*e(C}sr:) 4 (ԡ}RZ-D;QG `]D(A*> fXl[;LcfU>P' k\$nY=04gP"y͓ HZ239mэuS\Mv}r̚ @AJX2e󶯙YHJd-4GemSNs\mZ˛ Ԗ΅;{W rS8?;;_o:7ZΥۜKkYZ\w~K>HOf/'!]ww c*/׉dur><](eɉ/Qtq#S&,-xotmݩ[~--]!-O}J9~D݊dKߟϯO_6tnn~Xշ?{nN/>y{{{p7iulvMqʮ/-7F,Ye֭f PLgpb]'5C:^Z;l% I }3H jMAwQ7ߚ-VRd8!8?FcN@CU8>Ph 8X.^ִۄ?]\tuzn؛3>"/dA_F;)>ڝ 2Be^ [<%zWeZ_o.7=mtop+F͇N7~vOb._^q/S*a,y]Qe[܊b]Nv-껣OQ/GsP%%K,= &x$(w ໣')='sYHxtZ" Rl">M<^V aIuh-qә[95(=~5tqwnO=N H).OGB3" e0֚H-I3)-1d\a*uT}Doެ]h񴙺9Nl9=]V9=@'- +wlI\]u{̹?.ɖ놓<ޢ!dcnN1֛j `rL/V0cO+rшgO_To(_WM5E*Ԛ1pߊUc mh6/ֆVv' 8 1L[y{^5{z$ v"QoE0 J #XɅvK(_BZ:[Byמ=ZCJ pԭL-WR[7=p:зjPCMrE:ԽKnU+;Ί+ {wsڹUܪvnU[ڷS gH(wWQkGMv(\fhȧ )7܊SRv.["l7'[48_\u}rQK~Zvj-:r0,Q1zԢ6b =G*QKy#q92{#;UAϑjʌ sd9B=G*a #57rɵE_Y\i9#yH3+3iơ5sQK1Q?B9sv eَ{nia#HIx#9%a) -nZʷ CJB%j˅AJB5ΐ) (RQ-v4%yPb|NIS攄9%ᏒUGJ9sjw:Զ5Wϼc( c,?+"#Ϛ9#OOdq]2<#Z'92Б4*E`ALw(D< O>`f!iy.9&rIڇj x&?IP:l@-JI y4Q{R8Gfl4#F Qp'9qy6IMaVz&4 ;㇨%Z=+~ZQ7;W6]_}~qШ8;h U9lcKixsh pk,1L8+;֨pBZ9XDٝ|DFq8DT\#2URRLRjcꤦ8R PCzz!*҂ڪ1fsC:W6 j+oPa _j`+Io큏*B-`BotlCm^h||ǫ:>cU8pn't ߢ&(դ8UXzrTKrQZG ܓ'nQ[l1{r=9(=9cP+iP$mS˔B.-R/YREi)BU,jI==yN4Xb]Q8"_+JT-S8p2C|tg^ZsD.C6uibR TG[PkN8E`8\v61*UTvUvڢcHx(6'ɰ/ +D=%|2uPGڼDk6^At(p(&v }\hWYFKYx&ήgZ8B]nAXwX%iVo5)CRg;aY⨧WUV5ǂG5x# L\s|wR4{<-GJt9R:g'v[rUP- k )QE5oTyS}K5Ҍ}cjN‹a rxj`cXITRC0jbQDo h@ 9(x(16ޯq5\2xW)ؖňz!a뫳9}q: wWgw^޾}NA0E@*yj,}?7kuMFOauki4[F)VC"mcbuTK1 )VbC0db*xR Cqhw J>_֛REEFxJA:g#l^*TTQݾA%҉궝zF87j,2F_P/U!5ȕڴ@zΕڴ.slԨAQ Dlš'sc=ʎc++[!5E停l<OSS& h( LABPp.Fc?ZEsdUTC} 6\i  Z[W$G=y{N0֔ "j<. 
+p"Ր*lL$$ RΞ!)Wa>IN XxRI gq8fFa %T- Z:@;HQ%HOgEF PD `ʹ%I2-5.TBBDd$<E*"3A@ND۾]Dͼ1A Y} AI;+Cp&p҂&{)>֗&I.βaJ$jU7N{jxVCm=?O.T3ҨDPZ@XeֆITUyX5V+VPJǖlFj+|W>TKmچ?}VAѺm/`˩:5\UTHi:)(% o\ci\uAts:% JQgDH քѱDX"d,R["B8@\UTCN}xn[oo80\UTΆ-?a;xGqB.gsب}h3ׯ&뻥^VVۛK<}D}W `\%#}y&{gWރy\=Ǣxusn25DlA5^^yqq3sFX7gՏ諬ۤT'Cv ~'}GٸX}`/ų{wa#Gx^aB(:(m< fLP楍Zf h%edY텟q dZB$p'H4@sBe4wml.~,i5SsI5 0+OOS"N3I h 㻐p!`,d)We' (osBX2H N2:.Yo$0Wca =ifDJҀBğjut/;2ҟB4v#"h(5Z1cY˅c,9JtTINQډuJN-' i35<>%oyb!s^0nm׮Or('{diXu}m=Q (zcFs~3n~xi$ɭ??AĠ3%iXK%EĿp":ÈY9UN) QxʒbʨDe ]Ka+>elEJђH`D4Q"+Mr5FK J4xƜ-%*y+.h?vsi{E@s@)Nֿ;<;9yFDʹػz3hލT4q|lP#_JZ/-[yS dj/ކj_T_QݬB>M;i-vc;߬:V(CsuFIu?-URWbZ@j|iM{;@wgIڅ蘒w3˧'♼J*ͣW'pQ*,upu0_>Hpۍ=<|94'[pξvN'm\nMlo{ O e6u}go/Ϟ=Qg~wdf'q $|q9j9Z=ξm'"KmA5%>KR}ؾ{kTqM671uXK/}Ogy":$txK% 1?U_WWu+qK W&˷Wd>)fuv]&GnpO/'6닿]"^,N$xH񘓐vG=@v]G /죚j֖ѹ1KT>Ǿv"fryJN¼qF98FHTpd5l9#@x:;./y眗p~7``[7M?CWgYD#Vx[:Zz嫂h-F{y]?sPsP3Necπ )7-Qҙ fPOD}~0^ 5`Κy9T ,I oMSjSȕS&\nscB9XJm3T҉2DhSp pJ&zZkU{IMŒ*iOEJNTKpqeŤV\pJJzEԌ:FL#v{d: 0/Agq:l輋삚$$Q]?CIֻrP`0Bk\3]dՅjhKug!ZN.ՂAg{iw%`vj}IIw&f sS3 _w9-H&7y5C/$f ZMrKuS= \;R-jۙTq#IT[t2K'$Ah3tZm4"畠L義4 bHῊ) ]u!zFD=cQBa(GUT˶p}+GP9T+$=*Gr4*GuQ`h?f\ 7%p^N5)l ~,Ɱ'$..&ES0#n*v.qίMlNhA7K|ͼ}!f|u|YΧןw.yb]PzF+Z%_F HTָQD{KP !EcjjJ,pȂw&EtQI:UG *YUG 2p1M}7*'a/jP B:Zeh4Y*d|(Q ߟEBJ[-4hU-&H Jĩж MEqw(ә:Q\4\&'JEsʂԨN8@M@%BZXE%ͩ#Z4Ee3+NO6ļ@tPDӉj6hΏ^v&%PCƦ('e$:'3ECm _QM(JZa -$h%L.TKƎ t媝玾v KBeV +K!!GU 9u0w}6nROO{ 04z>˪j'޵5q#2$K+9b;qf'O*4fHiHJ203"!ؒF[X"[VHJU"" ,W KL SQƑziHsGz+qwEԛq?(Hc7b(p$ yFyR"5̔Z莴ZwE{a>EM1e_{ǡ՛РLcBu4)yEs4L`SJk pw}J xajQRG/ev^Y f vQe,l16T :7JL3oM/(8 Z2ZɉCނPCǂP q"*몫S3%\ Ykj/'wQ9t \JiIYRBs&RH\yAl3pExZVrq#*/Y5]eľ̥MJVֵٞ_36;~-o(k0=tC_vrSfUzffbe2uuksb!Z殚o=csS׹Hxn[lMeSr UNitߺsdhGu#ĺTr0{m UNi,8Gn}e:}bݎyw֭@}h/::g!UR9AУLSG>\iأy=i0OM bi\=bQ^t%L G4ו.L pR)ܺSZ]1esNZx !`GRD) O&J(ŪK r/L]^+!eQD_߱<c DG <#MG1xÍb}5;Iߚ" G0ԢkoJa5#!绨q 5jEM|Jn'yIm-ȡ/5+Xq;9M?r8t*k<[4z^ 9e>2?.`~=?W͋8Zx2{xWC#aDKJ6H)artp9|Z{~)k ?>%0#{3B[7"ofK3j_gq#?rMg'`&PL5pT1U-:e^?oUkɟZ2 Ӏdm(O3X&=ί&K&ADANtRq³!_w 1:ڬ_3rEߌz;Zm.__oLWk/7[fy-6x\/LM*QHPKԹ4R 0Oiyx%BSVE聉mu5@֥ yYpfXαd?219XE% <' )bJoK3%EIaN6]..W;ATq5Dfw<۹)CnRx{o{XM)WW5wxF7F giOa;n[KwsBJ\S4`k'-xN<q߆3T/٫vQ^f9M(w/^,_,&oWV0SZN.'ˉ]vOJAO2o'|r8)݇pǔH> ^r󊏘lxfRmK!}BѪB9㷇@?l lwԣi8sEy;Xf"bas 4+lg?uݼ,Ϧ}ɽ׃.K!h[f+^xC;X;pPՌ'Gϝ>Ͽ].5[AO38"SQ*3RZp儊cp6c7D֊Y(q|jя`~>3Ӫ30:򲴋woy1aTb,,Z4iNC2_m=;C\ԝܥAikXHI6?&o+M[ağlRYVS<] Z(1 F-U%Q+u%BQcI"( t3aT@$P* J}%̺Bho_;p1kl`oojgenj(gn{pIO#} H_8GHP'k$ A Z lQUEI)Jk\ϔ@ٿx^sp:m2\ m4{Y3KZŸ~Wm^ :8Z;vH{JD:a~{^u?& :I»V5_bulث9g ߯^h>6 !#zY|uAŔ`BEP1a绨%4ULIST1%hM) }QJ1Hj_yTsGүc~*VaM+r{z#մx؍o(CU^X&Q\jϛ'5ysr|}񻹭jrNydX]^zM!yTSu?4a͠-ۇfk=g*7cvٴ.vy y^;/r8v[?)_ wU `]pTgфʊu?hO+|3@ۖԓx=T#F:OֳRn~Aީ6!emo…q dIm%Zڒ.ٷv=Lj;\r[831N~t]|L.K/o)\L@t]+.?su2|bV5*[o/Y'CM%oxֲm&Z6A./7!BDP35Fw fqNL.1Һ呇3.۵ >{[A1xFQ J*J{ߧw-?ʹb8ms$ D) E-_V i`ꗕe~Y]eg4YԦWv/0D1ہlV $X}UH, [2",%.WjUAPis&5A&=X \YcrbpO)*APԴ}`Bvzh?E-FHyjkND딧ըH}`:0\&'&JGa{%CLt=7q $ZjE9fI0(J"ha%GQDÞjԊT-DK%RI/$LQJDQ' (=>E-V|O*s=z)ؓJ%-zHMK֌>C5[EG3AF HAd>`5FIֻG +=TU [ig *b TXA"(1A8v Ԛ8v3>IiP ">=>nh JpeSA']'?-Mʺ2)4d" ?%((Tą!Cg/T￝ff_f:d¹pZ-6c9=Zs=澤@HN wmmr*>8*'ub/yI\C<$h[?=$hzv ΃3$mL_6,7W{jaY"fPբB{rB 9Hft'1P6֑'1P{75$&HDPhMbILyDg{fD 8ܳ"BqE4PK͋hš)*GDԖg!PC(5RC(5*n)0lBq]1!t6j55RCBEbG03'îTS"--ceհjZܭsKuH!,D46xKN›J>T@,7O;Ame$ QeGdje`1j!>⧚XLv&&>7[kP pe\Xecy+!d:&*cpWʶ;}g+0:҉+äs4lЍ  +GU#dm4=& w3;&%8mQ:6 lȎ 5c {j P[cpl{Nct9U|ة&L~.㿨,o༚:,vL4˅SE`Y-L(t/SyhRj[/U݈<E V6TV R-&&͎Ҍ>BoH 86 U6 j)s:c;?(*0 Aeff:HG=A DzdF{?OK ͏-ggrV᭓.iSPF禫6"(6wFFBHb[j:aȈ!#iDUDbCm.bh5#MmF>4dA[J,MX6`] &aFV!9x- 
gpdXy,;Wߟ‰Iɨ|PRƂO"BǎF1Ў˅}ik2f;ρhgYYܗ1شu,hQ?2v͂%TELáF1N&v(ˮktG6N&Jv3Wh%j'rGŏJ2K#}L)y㦈J2٢hV:h%hSmMhVHSBJLj!3D('B9*qRU[D^D\s86Cm9ϩxWQS)>Ď=ڔ-c?,1֜N3lQ;5w0֜N,XZ"(VN1 8i3Yh+/Z Ol)0a2D-@ i2Ԭ nk jF4y 3j5&():=D+H E:T <, Y2IܡV^)KS'gfeN&ad ;uYVuiI8֗11e%:xBf%-hIy$Xv%D׽4Rt{ƢxNDws N&)s?jm92y R&:x$hR 9_|ډn׿|L338k}u~`΃k ?j/%\.;U|KW˓/:p`ޑwgI_ޝ{_:?18B| ?$jϔ~1G1O? ƴ|]a_ʦ\Gŋ{W#I/qJZ+oE )8 >F xK2GPZ?F='lAYx/*ԩeZQ4؊zik0\EZi}(G#ZW*V9F܂HHibϧI%DQ[!-靲NXkoE4PkQFE4P[hvXmQwG " B " 4ļ8ב({*I󽐸Z3"sCkr)֌|!QB欗xi̓^Z3JHFd-Դ`cv\i.t?\ɽtOᙤPG`ZjYA&NK[E `@6 $jԒ0 `.N"*PPCMDN;֟*I*DbWP3p|oqސDRk)kXk|-sdj2H.R2 JQ/*]%(U/=JOJ2IPFb >ӎ{7;%9g !yYjZ6+ 0!ܩ#Ӻp Eк.օC=wuZ"h8i,IF-5gGW$2>Y2ڸCbG؉dhӪʙ"Xq=Et"Qk[D c RGf: ^k굦^^kaMi F\VL1@֦>w6"Fi' 1`ƀ#d;HpYkQڱ\ 8C! 1`ǀ-5DE-Moe1k'͂=,"C%+"  jRp{uݖ62 ZJS Z2VPk!Q{Y ڸ2(l( 0Da# Q؈Fc)fk}6j-XĀC . |"9A(rJ,N)ީ2+8sVTbIED J(*q*&0J c:ǥP\'xdJJߡF%(D ȣ}wq-؊C >T};Tl=\U'; v-39A:ѥm (CmrF Q2 QXDr?2%r[%@' 'JQ@3%@'`*ncsU%ښ2(8ΕA ЉZԖ;%(*852%:{ȔbL)GPif(Cm/CD{2tƯ%(D ȣ)Τp}:jO]J0\-}?/o/nt^~~kﴩ6մ po֫ qn|/uV/ZEuM9hȩ5VJ5SZ3Lϣs;qpywƶ= _?/]߂x#7_xx/Vyf_as]]M/mӇWN&˸v x {ێր(4WawVK=A'vg=%*ۏMsCӁxsr(fo.lWj}\%~>L);AS[A0>/ 3&gڊxZZzn{@5*qlںI; ުޝ۽r[7n ww\-OyʒL*a岄QwP[k ETQ;%[M[ |$3 ޣI<˨ T,j.KHG=A-=%P)z*:x yʔ0' Z+Y):W֣"MnwTR$+FmTۨ"E)HD$Rё"Qj߃ۤsoQ(ٿv?~1]W D߼f҄9W3 ;ׄ&s/֤X]fk 꼭k07j,.xW}RW"t =jO.ah TdY,jYdY,j/JX|58ՎȲD%lYVr"8۾lGC05ZzT~NE(rDˢ%R*Bͤ9VuKo&o>i\W7.vcl"@B76<_2J>\* *4pXh"%\[3Bk0z!^3U%[D{rܵȝ9r0C."ġ*SփԎ"BX}M)}ZytA.(P{9_̴& rxJ۞?&lv$j sx^VeN۹OSZEajta5Aˇ(D~jʰiwJ'EPqysTѨs G P܅#S-ܸa$YH#Sqeno4ޱ7nQZ2PwP;A\(ME̢4Y!RupPu9|]"q3eFx" "$=.Ynn_E-.< ,I"%D4xC ܹnp Peɱ(Ԇ2njQ!.#NҢ7;<<#{**Q*5*EPTEb%>KE(ḌsD3Py잖4羆pTIUX.W[T+N>4EZmN#8Ϝ.PR/ 5eرD6TxcdȎ=3V{_F&|Pg/$Mh#nP6 sm 8.78r[Ld *Ԗ@m kKMR|^5QF~˗/O&7u8;V-›=xp7WU^\jsnW'?#o#?بj nuD8p?6$A1H 0kZб 禕`]2Fs^S#qyk/\xAE?]V'^W?|6f"7<9u@7BG.Yn'Rڼ-/"ouJs?j:Zs/ۀ1M9UGN׿g}-c"4/aQ 4ȥ*Ӫ n*jɥk=ve߂%-:U1sjpqy=mVi/BzY'!uz3k~9h+ ܛ>ћ kVToݸoLLJ~/Q+ۛۺ@Z7@Rڏ,uoCqo<@;Θɝ1r> gJсC-T4%R>uS~^>NExWV j8+c++c</[ZYD$YfnIꖤnI% )#{shM"X33#'{j}7ZZ&*͗ogWπk+w͊[LNn C)=|f!8[Ix-`w_OJjzJ ` S/z#޻.j0@` s .wd%"69C$OC'9Jj(P$#}2L<4D<@O9o' (9P+h C EA̐I5'"5ͦ٣5͆ND"HdG#F.rݹ+a/>OwxunVgG[? / |<5.#3ceؔ7'ھv][0 뾇zw KaZ/oݩfZlכ zze;!C-.!v `:*6yYϴW0F2L-uu#rf$RPOQcR)w*NJgwӛ\Z5"#r$RP:R0Ԧ7iٸܓEhTʝJS)wP-Lybt+mޖFSm}swonR2%(-q[mVێ ,bu=u< t8?}p!ړ@hr'cSv 0zf*PgA>;-zfF!=ij"`z-$FZu`{#.L~~XI˯17jiѫ-fqßp]rf5UM}n\ij=uX:/N` Η5%HfvSf P,nOЄO 'nc|XS(yKÍm |{h"i{l6SB%z@*q9'].*-ťd,pӨB3K*m*fjC֎g uJ i')u 8BPK!zYԒZ7ԪGq3:sƨ~tmoXّZNLJ0qdE'jwRd|oy{"MvFt4o&5)HF;8_ C-z&\oGԓ[5%/=ul{2'R! RqX Ԍ8bXgcTZ&b߷UϞ) ]DxZ6 Qq4ނg/ МR ыȫ# WRSѻЖQ pʉ:uJ iQ4S(Ї)0JҨy\cIQZk7[kJA؂[ ;LpfQZ6l U.-uQ+\MU0TšqZY q5ԚyzvжLǔWem?՛n hP_r+_zuK&:H?b78WYyoN<܋ê /ɞ{}r޾כ :loFEw;bt;lsˋ@:_]]>rP 8h*W]hYvlgqQ%Iⵕj{$a"P J5X"`O&s,p~"g?U6&5NRj:ARTxJzt,DjJ }No!U_JpFKq5Qn WÏ=' ZP>@G `ֳ qؽ Ժ/#v/2\ۢƕ-Kv/ٽ>vϳ~x8F&v DWLM)-jӢc"v|LbG,v.1 54 bG,PKc"v<#(=r!QMM#42Tj DMDSa5cQMGT"JDShI@{*v8(:xPk^2rOj<>9^:xRKnL *IjI LCc¨-26 @ l%JU%\XL7%pZݪ] ! 
B ɬ5Ay$mB9IF,dz1ID.*cE=MjF\O$yt$&Ӥz43d["⨧ BJ4A3ޡFiR=igƌ;zd}4F΢}GECm"ڐG΢EyFtQ))~{n;!QVDQ='#j%9SVOܳݒC8Ѳ=^=RT3RFK5Y+M)Z؀]utɤeeš$E;R")(7˳-j|S7˞evշhl]I#Тub:-LH"( L^@lPEq/S>']놉RjoTKRSZVEYjgaj:@ R,gܓ~6*,d@ 5;fBzYڡ<ҧB 2HarH}I8,-ʧBA-Wż=w(ne.5V3u-f+e%q9 _?ͧWֽ.q9(|x7o%AwN(99zM9EIeTw\'7n'K[b^௵j1]+7q.odr데HZmaًfVm}.ukeu9 S_#"[_yZ8.W7?jWٿ5Ԯq{e}r;mHm^NݵjvwtyN|+uAh+Dgd޾Ȯ.4 -Qo/o2ylgno7nlxZzETG@ZGn CK D *MH D(Hs@9*PԚCE SCQcpArjTybVeU jjgVcj 45*xVYO /XJȬ»{d_5YEeP5w(U>[ pn .v)7.x+ԀnuBQd@x/!eQifcS2-%Ӿ!&5Ii Q7ۍ}'tޙ_ՏJb"ơz_ M1hzC#:TELCBe?89o!=H@D WIUY E ]ڛgIP~c UU];0KlrxI˃SBH(1DF7UgAa`[i[Kbq-\LVZt*Hdb0=޼)RW(ڔ̕39|dR+d,t@휂-Py&S`=7Hj8N\CP+ַjg;xc A{اn$I[.LRpGyft d+{8STh93ɚBMrha)𑥮`,Rju'D6A"0t8𘼤QbC)z.w}wmGW |Y B/m f/@Kٜ%}#dus*r3a?X&[SQq=Aejn{ߧyPAZh umMjy;GݭSnuEYHԎ *,({j#=1ʈQq(Xr%yNg.oϮ.jgkr̮.$g̋.^H32xm%5MKj]rB+rC2ي=u6/w ;ᆪT%8K_2Y|y2Zt"j ٨\DmEjG~zڄnii8iJx E AZ\á6e+DhJ1K5jQP:!yCļ NZH< c^s^뾉XCu CJpR:b!@ziWҢ @sW~ƨ)-EUPTAQŞ1ĄJYS45dELP\p8Ԏ"C>D >4M4MÍV%<"h8Ԟ"h^B! h"4 brwY2#XDK^dCx$X<PsLdB$"LmHbe@htRq EPEb=cD&*6Q-|♊[E^8_FR"e-zeʗ:_,LI672+"Q;[D? Zj]ZIQD :W,á֬=86׆,L"¡v^D!j#DD$L# ق O@L@^j)^E\А_H'C3rU;٘FX8M@ReYgX#!{m'8+KK#j_Z^ {ϕև # 91rbyN,q-T+o3]ڰg;9s c=Xmd"8omk9x:NzpE\>Cj^Ȃ-u=yV+:p-& r tiWZ[!u.˴(ԾEԎ\ sP\OjcdgvTOGΩ\Dzr[ QEޑs=„wj DsKw" j)[qW]%^:Aw"?]%Wf|{5„wjW !@Dx'UbT(W V%jmXqQO;)D, b ҉*qO qƨ":(26"Q2:8Ԏ":HԹ wZUkC[JV~-VXlXק]Vj )RV$jNJP²VXJ/JVG#ueΚaJ?9ȓ)a7 fMJ0{C*%4Z [cl=d3H7 elġԬXj!'$rbȉ=[ؤ/TD H5Fщbէ-(Ԏq[F'ҡG)"H5&) -TCU[$q!$*TFH5Dj )!~n3$ qQ~LL) aQk_:;^^Emy LШ(^kM:u_J/^EsG4cQҢ=QQ{E=~S?''פbׁv6bE`&ز7o|]vm;Nj)E/BBWXJWX&>4[gv%L?z}B0煚nӒ 4m^ܙ{.,j+KؙԱ$ѻL.suڙz^t1`/=C`*цy s`: n7XgZ.aD0e]tp~#5rg8ھ 2Z}';0BȎ5rǁ^hȹ9 uq5tZ^[2[V"QjIcׇt%O7 )=|aDcn|a/-Aq&|M0s󕅙3(Ԏ1Us QsD5N*Cbs&9aV}p1 $"VcQ[ξyᙋPQ{e\\W2s!DU&}yj+-!D/:x[}ї̂x~H:rm 3IUv-7-kĽ\Á˼nkx{EAn:7VbO &|r{ztE\f.wGv'qx@dwʲ;zB4ׂ2.qVoD +KtBՉIm}m4XX{w/ٟ|ՠ_uM}mx ڽ~pIUO:` ƛP]llO3lǾ\bPoj۾?1p7 u$3t\v2{!߉xoTϝV; {5 %lfX5;V]T0]Tb^Gyp}s1x5"%܊ )& sœCOZ_yٻPgIaM$_9-.D{l8,Qa&6 Yڙ`sdC}~YRb"8Ԇ"HԹu.>ǡv̿HԹ,(~Z*j}*j=U,Jh,L{VU9*I+!O@4/C2"f+˞~в"@Í ƫ6׼ NIӵ^7Aa%8n{i )0%+S`15S!j#8NP8A*DWLp)< ėȥӅdPc9$j[Ɯ e1'#9j$М М2'#9RA'n/`+>kYNu ?efߜ]. hNDTo2h8wBc/Tyv[lΎ/^u#y*yo5jUyj3.|x$jq5C1OwW4 UYW?_mϜeO-g\E :>w#J6CNxN|d'AcI&9 7,oFPޟm79"PMKmDh'Z_cMg:k`T˄i5amYZYL)^ļ=綠]vG(q|nY>7qOW2+vMZW֩Db*UV(/dC\--8Vvi4Q]a4I4I4A'l38Բ qEPs1^ (Dy&*$R*[ĎjWʎXQ[E;~hQmN%"LTrg̗'/O{!"7{^gC&}| (%q-|8}z_~B:.d.1]~MCM-7eiz>ϖs&s5x@V׌&j{FZcoxQCl{!9d>օsj3EHԮi8fj8嬈: S H/P{)Rb$ɕ$ʓ͓Y{:^7Iw&Wt˽z(q qP;¡# /P{.b ZqνJqQqnQq2P{aӨ8g#N5&iFji4gTDOV;K`Ywӵ7:=VMP^hz,>? 16FpPKHPuFK.щjF#'FN,ljD\?ݗ0ڙ"6?PsDm˸1U<۾ vZhچmiƞdVʹWӂ~j//..7C;ܬ?bշ1PV /_ytrS6\Ëӏ _װ4`@ c~Q!HHV;@w}{& w+rMlo1J_7x#oRǻ7r'Zxyms~mz];u[euUXt3K}o6(tæuJ o4ho ZW9|mʿrF Ȫzs d{u+\]mǽ畀\ R3T?q2C ݌?k%.7jp'中'l|k&g;8q y{ݞ_7K=}#]. 2#Q{zQ"3un pQ"35hEѨU 7W?z}:޷LDf"2/lO8KsU2JhԾr414|c$j] `QOg>q"aA jX`XVHK`$ _J~A4JTĖ2Ce43$jnj-8f%D.yjr WکVn6|.?4߾n6v[烈{;} 6v>l.9*|hCBkR@:TC|Q0tٓ7񚒡_ބQVu T0"s51?XDEE4ߑ,C-+C-CYw w|;5ߩr"1ICiVnPOП.K 27f;/mIB!jS߰탮 EϮH<jmݗ{v<>oo{lE/8ia9l!T|2g*Vw;ӵ.ZrU%Z]k[qk*͉ DT ePDbZ 酵+d9z/.&۸o6S H"ⶋEs5[x`?iuُŴN PnKѫiN+ݵ=JAfut}øhUk }oM<@$q5"wfj +p.瑨P0%?ƷXwM9 li\rrvW.Jb=7Dd:<å hJsm|m6m[xj!]ԡmQ:_\U:=d Dڕ^[Oſ>}e04>rKvg8vcBxR;8#??-EO&2%̑ėӓp-KAeD1D}c)HٻF$W}9켏. ehԑMDjxXodQjD5;D%1]|^BDD$Y RHB,E&G~4")~vitn9G™K̮(Z)眳lF4Mp[MPJ0p h 00+JihBvɾqؕ优ءrYF{n^{n"QaܨfO\S]S}#QvX֯/Ƿ;z~1t|;הkkJeCvZ,v8Ԗ,v% oG|;ۉF* *HD`KC3:|&DiT3jt3!=A{oОgt>; lYpqu|jXԖ,8HRfG& >jR8'rLܪD˂BYpeߦL#pht1jx)GD8"u}L,YۛHԎe¡&u߸}d c}Dݗw<2 MĀx;&q1?pγ`:f$'#1HL3f:KA`U}!Q'7{Vco"~M:. 
_EV$j(PL.ԉ Td"+YYuTR˙XWkjW›6M/TQ4Cb~p5&\rtNkoC&]_BF I(fp4,T1^Y*jJtYyGBMwzPzuqNԎQ(`i#CH(j&~b 1Wo\L6ݺ ͢E}Dl{ZzIe?|xws:Tφuk#\'7 * a'q{}g2AL$Sy9-ঢ\ڪ΄WG1uCLoX7׭fqyCNfny^a~ + _zHvbaI@y'qaYwv}6(Ġ bT}|A׳8ewdS,r< wz6&7veuZ۝#M'ps}|[Z8/{uh DY'_X7".h?g?j Y'1."ow֫ Un/!pfo o,_TaZj=dBzyj;RXlTV6b.z|zu6e:?/omov[r](z z3[^iX*Z6UΉiyɨ/n/cb~y˛)O:vb‰;l6SQ^]_}in֫8iC6B$|0H=9Hy Z0cƂVGozRqIc@XfفBMb%c@H;k: E^pѡg~,/zL<>4@bN#FG{~Op4r(>ϛ߿#\}gu=8>9,E঴ue7A x7uј2+Tz)+.)U؃I4/gC BmE_RPjٷ:jkH|F3؅NۙmkjEk$j]PwO/)F+YtCmB {BӖ$%)(IAI zRP(~L+(K-qd[x^l'Qqc^?rrG.&vO9K@jPjOSuRG jgɸS$eؔv ~u 7+X&">+8a%,&r_3b8bo܉B `=|lj?&zrHui5wz;V[b *E1gڲw8j)%u' N'92MXԶh¢v}/Eh:zI-EhMw~%z=FȊgƢeF"mFr ګ HԖ 8j)mMlkb[\5OLR6'3O7OѨ̀~EGĄ 3=åc? ~/*D:k;HfQ}L"$j 1It2&jv:@ʋF3!XP`r{C pQ/Cj/%#?'EV8; _Y4V03Rp\󛵇 M6[S? YjgOAV̞LVd1;VLqDU[0^p0(qvU;5_O ~2L bهà -CP+zP~z@%5ItMkϳHiDffZoRA`_ gͣ_$zۡt5q(au02GcaάuʍsQҙQL,BTE^[4"hK˃w8Nr Z_ o!6yXQ6p(*J°F2/ZRyKŪF PKŲ#Q۾ӡ{jEu jLu Xn&'jF8QE"1-n{,4(v,48ԽI#k3#kM*hRALk A$NAw. : eCǡ6eCǡv^7^ۉNPWrG b=^+N%w9mSRjxvWBSaO#'ryqpT$"*m7T;/"~2'SҞ!lw Q2ԚA\(D- .<6Z]Fx#w7riU9h-q2^U4Rdhkz>(C-f|QAoVj| u"Eo<|c<7|c+EK[ZP9BԭEM-D \z E.L c;&{.ѵ/xzzvU*k_!tVVՏrA`~R]vx^Z hL0Bb cJ4h#V [&,3g2騫H2\T^2x`@41Ik:H桀Mah05I:# 94 =-10<(\oJ:s6FYQQl$cDZ#*e!㙔*J%XeS U EtJkc/4hiM`d=2 aZSR4`Xz NМ y]o&ͽ]|}/B<3HVب*$VN]aDRTEb|^^Owm/H NgQK <,2\~p&֬b. 5 )u62ڥ]s@GڼIG2NhRy! J[.&+ Ĩu{5̑='18H7QLL&V%#ыP\VAh뉁ZBgFJ4o~4f{Ό_-NbQ#W?/$Ty5EpKg :IEM0#t\@a&<+!ZUZi"1hP iPSBj" O`TzЙJ4F9eOBÃTs&@ |FtNB^eul-_&RBL_ǙP tXl< )H/kbJm.Y y% Bq07ۅoX)%[!ϫ">*'RjFl)}iWs4k/Cv/[(f‚")\A8qT!$t }wLv9="yЏç0S\3U@dUh|\H#}ub({3;گX7t!/l:xBnp@D)+'oX H yޮVccbe+uJv4ȭm.sص_ۍId1*_, QR*BqQHI]ܿ󮭦1_֧z"w8IW>H l,^{qUu @j]@u4H6 HFf6یko4b]22늧+PTrvNR=LF=-fR "LA薡Vmo~%o\ݧי MRhm|f>'Zq|sh;/[48p`.Oa7ndaxaEen|ʈ|7c\j3~\r/-l}z>{\cNn25%~=ZmIۣڟNbSqsDlk#nʝ]Hfu$7ͻ{{W\G){k ( g}W<⪓FT I;Eu`'_W\/0StM뾐8j9{ydV<ь]p]DA]a+Gطv#;L](䵢ϺUڴMm{ uMCj j E)Ԍ/$_u=:ahffP! ,sYЌ>U[ v88-G-GP@a;OQG)S=GJc֬ Z<5+Bm5 YFx-hͶnX9Ŧwyuk7hNΖM=o{M (jy ߴyO|~`#›,M%Ǔ.;97y簦8<Τ̟,.rwOџOSֿ߯?'Gg'Sr}x鷽ܩ0nZέwk{tv#&;_Hɚֻ(cHT{2+: W4C;^Bw XM6Z>^Z>MX]'?F{յo6Wvlg"96qdu0ΑPA?0͸k ~KDJ dmS*u7ǫ4&IJL%@S3dXcXH6X'QhC-ӌtb)]v=լwuf.-IR]RD %x4 SgQC2 ܢS.e _E2tX|Uɞ+흓NT-IάT!* i:Ar˨mRSBZJO>i BpD)+C—MVY-Dk-Q] /ô0_F%,;[ư I He4@m#i iSM(TIx" /`}Ɇʩ^cgcDxjA$ *S(cb%J$G'Aۧ5JS.42tMrv:;g+GKQ+󋏖娵~~rfŨ>M| 5\m%FPH[APbЋV9XRP)D`8@ .#xH}o4zVPb:of}*Qs0O#0a1򓦗F@o]c`$\ w%kDTL*K=DLHq9r \ 8lp.Z0υ[`K*pjm*6-7@=ҳ^r/eҁiNIR*3CsB̛!7 ّzS.X5pvζkd:kU7nkJcu~`S j!r:D:Bib`3eEۤj\2zQ=4Xf"KÌa^aD$6Z6CP<)Aڦ|YZKej`u5*EzTqM%Eɚ ;;B0 8'4ḌuR`#iz#n֣{]]kѶUwC}_bV+>1ȄjT?* =kpBu۳>nIgqbԦmb J?&j`,>DNtx `n, mZc u*L=3Vfcښ۸_a$$ U~9v+u6nk. EʼlƐ(ΐ_,hť'+Mb9RjegD4fզg}ng>}fđ`h1qR$bևI/Gmw~F'R[OGs22(4<}gK٬sb[h{d4$`X2'%ji [$3ѧ:mՑ:L->rݸ{ۚQP;Ϭw=C{/5`_ٝ (وL_{AX|÷أ@7wzK_[ v@ gxƘ;`~$ƝOD͠ԜE*Ǵh*?!KZCoGeӏ,-)5rfwyfW^>udW{?ף (c߈kUJZϝ|8TTt+zdZ֯ S__P!F5VȋATrNM kKc4;#ƈTbt)\BKCk+RBF|Ԋ=xSNR3KMy_RWSnIȰà2縵J=zc_b9UMo#{קc:A NozOk r.qq2ǖR5^o|wPZ;V{@˩׿hx1}j>flXʪ01nX,1.Wӿֿ׽zl-6ru֒|&Ħ4V.RnMub:MN8O,yԇޭ MMqa8z7Cu ޭBL;bKf8wkڰ/Dl*VyG2poNߌ]gyۛnQdi&)(_"y`-<&#^YVF VJA)JuuE5LǺձnuTSE =^J* g kjtLT$J-sj938϶Ϥge`͹-^f綈YMƮbuXE,b,?B">4ykRг+EJbh'84Qn>H+u핺J]{W⁷ȹܖ$ZDII'4]Sɚ \sPjI J&hmWRUku_<@>Lonfݰ~Sצp[o4P^[&+ޡjt 'ބ|,ywu7Yuq9srq\y7a6z0`\)e1n2U$_QqoQ.`t{J9sr˩jAk" pe{kxw%\7ZNT}69/􋌑HY\C,A>턳karx‘yui4栖\?@ Cݖn k[ؾCګ>MfبK-ISFAQTSL,2 2b\!sRs* #Z1{qjh`ᐐYѫ8!hq 0V̉}Pa #hZsb3mVsonAqK+Mqq ].A9; ̍Y  d`Cf *$ p.}y WKF/ a <4LWk!! # ,OщD+{V,Iph jӄ}HSxgmM2A݂d&Yc8 |tu@!+(ˡZ2WIeA2k *g\c"w?u9QP**D>nhJG3Qrb<#kB5X1rQ ~er#. 
̆/2,'gZD!t=hyAC |/R{_{O^<߱)u(3S˽e':}QPƻ}]ᆴw"E[%fnZ WxC,GĀ̤'ҒkǤmݐL $i9QR«hOm k59Ya]]4I\0͓2'54+qR 2Omqhwe2]}Ed`dYw'qON@^I9ɝ=9:/G&q"%'JD$NjXwe`;99NܜIܘZIč8Aqc6Rje1'<klwcV]+k2p'T>R+ylM!)Sɦ+*U9W.+( @BV3` +te^ \mpMy M#iZ FX%bxqRsF'J"M..<HPh]~HqR F((ZrDA $_I7 纂⮠+(/((@+UO leLHv;е\_k}3 ,feD"3k^/@'?x]1pRnnSJzR9T:LT%Pw4Rv:eSsx4_לH\% 0= '&#YG2̀yYdҖ?y`S5XZsG EiaaN&T Fܟd譬p 1znEC)5VٳZ*Pƾ   (`a1Qy.PK\ AVW9'4*PZAHR܎Z[qf`vmq-p˖.*|߭OhF>JjM9$Q#)L[4hT`JjRW#w5>k:N P_/ޯeK&2K(r!9;f1\OcOohA >-uKv|N5xl9X)SJYF 0<"3Mٱ2d jTݸ"+ hq!U1iIRg QY%xbϱ#,V\ާoPqrtm$&Z hiAl>X3ohqizsFJQDgM0Sʈf->l$.ԖY]RA$щ&RjCD5u:ќMĜҔ-JcEgL CH+!7ScX3~\;-2 麥mT*-" dSOE+dxl)R .)\k-kb E@1?&ԇ Qk,G>yI\vZ1JArӵDp-/ڶÎ2=T)}\w %/[iD؞RZ @dQ^RW!Y^2B#0bI"\WX tC4SI#f1&%c]MҊh}EuH |_,<0}K` :I%MktQLSBH^ m\x[=O140A[}ipisK)2{ lRkORӘe$}[i w,\6DCA!(m^^VZ8BLa{Vi$ ;襏Qyf5U6al jRÐmx&h)@\왖 Or{<7^{:=VR&4;)t0} ke /fX>>M^6L8@086  ۆ:A| # !};M1eGгf0⾅=F40%4>۳R)AL*ILFi-;M%E4RP4Dh`BCaRݧM)"r@M kgоLݝT&q L^(œ{v"PIa0UXDl1Rk*(O-\RmVnwۧfiPg~9O$$@+f}2J-L1V$[eZhK}--cG2vIˈOs֩;/2Rj4$@&+5W*2R_=t5􌻆Z)Z)ۑvyh4y( ̡][e슕 cW /)$ص_'Rs;Ʈc}хh@̆ct%4\KL׫Չja__P,2Rqy2ztg7q6qh:X7 s cQPIYVBaJkCAv ]&^A~jk{^݊zLr?޵[춷Wl~xdM l0`XȕO{S*~pڂ^{eCk"~(dž|Pϫ4YFYnv3ZxǙ08@('kET{EPJez?ݼ@)S཯ 9T _o~^_6[nژsO֌ky­۞v7ۂ&>yB߃[F"֔FK}Djnu5qk)gRpJUv)vn5EI;mBQb\*8)-0-?>ps=1C:"yH0#PQ'JiL`gG1X"*l+v9n-pucf>+BqvpZ̲bT & K5r||=?TEr_+ID*"Z%>Z`%&34`x}\&b~hY`}$}jKA2|F*s/'ŧM? u1DlIri9WM>^iex[fD-FK]+uGU3w=/fV,|9lPۖY wmm lH"@A N1b;'#G"^lVuޤ!ٔNLQ3_W* V ;Z>Z%ʭx{RgZ{_|Ë0FL"ڃ`Ug6#DzIGO`[<ǦJ>]^y09@򲢧).{+SFBv.E\m3jpP\>܎-J  #ۙc[ѭZbFvjxٽ "u-2PLޘ{cvRkpw&g<%[SDD/MZHz2e{¦C.U*ݛJua#c[cB|#zߺ^PIM{؈ś+EoP\y\U@HP5_AkTcXoc& U :ȊC- "+u %,Cp~Ac 58*,:E~f\ vCk7,"SàMkv\UlZ 㪂~WsϗQ+j*s889AԓC4qOTA*֓$vVOcvJ$­glxc=T?+w> ߜ~ZUؙiR%.7&TgI4a ^tUţH("USV$ay~̅QM$MK2ucdTUp$Ap0P+`,+VO혃Z 81WVDW$q?}b*'jTq3eԶmf_6*!q /+9黓|6ýp_;糣1}뭁o?ü̶%99d1 "eHZ1n#w6)M0$C6_HTf :BpKZ0BqC̢քxC,bM1+qZ]Cd@?R Ȁ"Bp-+?#pd@5cOkҀdyE\ &OG%} ulAx47iah\fF 0a0LS^d-3sn -.)< `Aۖ,zd;Ikբ?Fji~6#D$5XPi%^)Ӕ9A9$H9BJqC 艚7Aj=۠3A03P3`ftkFffdfv13#4NtJ%"Ц:&BRGb(!0TExh/`݋ ϡ>_A-H:Q5E%qS/ 5m1t%Ԓ"4N#9fo ⇚6m15a7c) DDKdDK w5ڄ$jI Z㦥Y:lCTMP-_iej؉lDp1fQ.xl5^ýl 2)ӻLs aavrԅ 3+B Miu>kK.ޖ;rҗy Mc[HqC78ۂ lN"Lqi*]\$$2)'f!~R*lu)rF"wGIӣI drd `OgӆvE1 cqLg}'}Ǒ8i_rJp7HY֬s̺:M:#x^H'5E<:wp֔F:B#D:B:2 Cj.p4]?Ԓ h>{$XHӍ4Hm@T#BH/ME^D 9=ẽ7t.wn1f&ܽQ2Mog|~=WdpG9I8Zܨ r0_|=^0ȏz8@,Q|N?y_sqo Lz=>9R2cpc.n!% / Uay@9E;C\>z!fSO :;~=oa^]_Δv,V iJ[4鍚B F .k;=Ff\$A6V}5SZHZh:$EQB1"PDkۈ|S|f$JxViĭ oR䋺zI)YTNR|QsHJ:,j+jt2嵦$Ŕ2Sbv.ՕФniS3p`}=j#!wv3~a.oPj-O$Ņ,Ҁ`)2%r |,U+.h+PK{W|\|5S8F 5JkEKL~vh1v 1-3Qn<0ŌgY bR1v\TPDhiiFQ Rc.4lQ+yha@M޵cwu@X1 1JWCMuZy+ED,Ȳ,G8KUsYn}Q$Nn}Q HE-R8:MGBuoMuɀϣ>_F11;&fllߕIncڭ-9y"pcGs-G+(>jUCWbbggGd40vP`rNu|ts3_ Xop{ VZvVO e=_ۿ d!ů5rd/`z0E//W*=OaD+3wq mZ7iYm|郙&d2ؗkkTRK}ڷJf`:0VJlJi2UH`2?)}(!(\ih0hI)ՠ |vVc("=<ڜ3s4/hhD]ΓcHaS>[ƫ|l@9 w&jݟ~4ͯLzvcR>?/Y8|8$7ϙszDUjݤ?[n1+pKppK %=sLF\檷滋b] |fĝO)%GSQѱrj}jR-S6^cTqu"mn*נ.l/ ElQ7 %>WpO(*7;jAЪnՓ|M&3$LBzHvdŢ5V:H*FCt 0S%CvjMd'LeZkCd߄2"E#8D#,,7g'#6YWecN{)=4T2\IiQ&aIeYVbQ n !,u,HRbӂ1Ċ Z n崕#5>BLW2 1B͎Sx$0d/3bEITI.*͟K9Ө4#!LxI\|\{ҚkFYO!,"9}ō[Og{IcMw(?.9/ݍb鸺ed𾊉6^"5 <-ѺZ Ԛ5͡j-6y _^~F}A`~l8gف{$ +WX³8c(N_z/޵L2$TE8J9 /T'C}B1=1Ҿy: Jؾ?L46Ѓ>.p˅=\.`,EB(cL,.bR3)i M,RdHy^Rwo;Nзd~Q"QN9A@(""nA)2`K\8ȈC 3pG4.ɞ!z@-z7z'ѽm-oDؾ>DboX"q,Nx`7}E m:1I q(fTi hiI6E?(*-,Ž"}$a)WS ){.*S? ;Ҩ35IqaR)J Gaz鵇1'* Y㡂)A0tGJ) vI c!`Ca6j5jXP[W;]P9.C=& k0D8?Ԕ zMeqP֘P7T ՠ7VALC gjh}fNEY^ 1H/Qm B^c,%21KѦo~jWǩ}vj鰘cŅw[_ѱ5Cw#Y#q> tRL ~ f}ɰ~  .)ZƣEa9AO<"T5n6yZcztU~Haj Rٿ{1צE*צE~@G6. 
14876ms (07:22:26.682) Feb 01 07:22:26 crc kubenswrapper[4835]: Trace[202954038]: [14.87684526s] [14.87684526s] END Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.683040 4835 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.683057 4835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.685118 4835 trace.go:236] Trace[1147390178]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Feb-2026 07:22:11.820) (total time: 14864ms): Feb 01 07:22:26 crc kubenswrapper[4835]: Trace[1147390178]: ---"Objects listed" error: 14864ms (07:22:26.684) Feb 01 07:22:26 crc kubenswrapper[4835]: Trace[1147390178]: [14.86421575s] [14.86421575s] END Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.685170 4835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 01 07:22:26 crc kubenswrapper[4835]: E0201 07:22:26.686201 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.689641 4835
reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726169 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55990->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726244 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55990->192.168.126.11:17697: read: connection reset by peer" Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726607 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726654 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726909 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726940 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.476710 4835 apiserver.go:52] "Watching apiserver" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.480885 4835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.481296 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.481843 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.482056 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.482103 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.482248 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.482272 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.482370 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.482433 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.482520 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.482588 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.487519 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.491481 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.491688 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.491801 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.492002 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.492250 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.492629 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.493062 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.493313 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.497779 4835 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.502358 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:30:46.872176828 +0000 UTC Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.536333 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.553164 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.576084 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.588178 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.588235 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.589523 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.589960 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590011 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590042 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590073 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590110 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590132 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590152 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590173 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590160 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590197 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590332 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590388 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590570 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590634 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590682 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.592008 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593638 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593709 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593764 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 01 
07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590583 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.591225 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.591701 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.591782 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.592123 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.592164 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.592619 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.592767 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593029 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593110 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593611 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593762 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593820 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594020 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594060 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594093 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594126 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594157 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594189 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594495 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594495 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594528 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594568 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594599 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594629 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594659 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594689 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594723 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594758 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594791 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594821 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594852 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594881 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594913 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594943 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594973 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595019 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595050 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595108 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595141 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595140 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: 
"1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595176 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595225 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595270 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595311 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595343 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595378 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595447 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595480 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595511 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595544 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595576 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595618 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595669 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595717 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595749 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595765 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595781 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595912 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596211 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596279 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596333 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596385 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596445 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596476 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596530 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596583 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596634 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596683 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596688 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596735 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596793 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596844 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596893 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596944 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596997 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597049 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597104 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597155 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597204 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597255 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597305 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597356 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597403 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597504 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597564 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597616 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597743 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597815 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597878 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597928 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597960 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597979 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598033 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598085 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598139 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598188 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598235 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598285 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598462 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598478 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598540 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.599284 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:28.099255386 +0000 UTC m=+21.219691830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600456 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600504 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600511 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600536 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600646 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600914 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600932 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.601323 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.601468 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.601545 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.601806 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.601818 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602159 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602223 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602457 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602550 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602618 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602659 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602680 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602724 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602739 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602815 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602942 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602946 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602990 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603031 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603064 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603129 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603163 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603193 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603223 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603254 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603284 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603317 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603322 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603349 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603402 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603473 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603537 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603681 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603696 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603743 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603800 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603819 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603869 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603930 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604002 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604054 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604094 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604106 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604265 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604314 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604318 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604342 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604354 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604506 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604558 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604594 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604629 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604663 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604697 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604730 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604764 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604789 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604801 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604870 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604907 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605035 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605090 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605142 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605193 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605255 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 07:22:27 crc kubenswrapper[4835]: 
I0201 07:22:27.605303 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605348 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605397 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605512 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605560 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605609 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605655 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605702 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605753 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605801 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 
07:22:27.605846 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605893 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605941 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605991 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606042 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606092 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606141 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606195 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606250 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606305 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606365 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606787 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606851 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606906 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606964 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607015 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607070 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607160 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607230 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607286 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607338 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607389 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607479 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607520 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607555 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607591 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607627 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607660 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607696 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607734 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607806 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607853 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607880 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607909 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607961 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608002 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608040 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608077 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608114 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608126 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608153 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608206 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608256 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608337 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608393 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608556 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:27 crc 
kubenswrapper[4835]: I0201 07:22:27.608583 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608627 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608770 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608806 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608838 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608870 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608902 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608914 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608934 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608968 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609001 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609031 4835 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609063 4835 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609093 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609123 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609153 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609183 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609215 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609253 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609285 4835 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609317 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" 
DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609348 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609377 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609407 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609470 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609501 4835 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609532 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609561 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609589 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609619 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609647 4835 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609678 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609708 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609739 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609769 4835 
reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609800 4835 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609832 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609864 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609895 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609924 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609951 4835 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609983 4835 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610018 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610048 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610079 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610111 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608928 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: 
"bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610119 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609276 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609665 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610181 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610239 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609774 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.611984 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612041 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612160 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612184 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612784 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.613015 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612981 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.613556 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.613695 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.614779 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.614827 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.614924 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.615706 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.615853 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.615916 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616209 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616277 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616499 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616531 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616727 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616944 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617053 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617094 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617196 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617216 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617479 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617558 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617737 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617768 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.618083 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.619009 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.619213 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.619314 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:28.119275799 +0000 UTC m=+21.239712273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.619792 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.619819 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.619840 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.620704 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.620840 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.620939 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.621059 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.622790 4835 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.622982 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.624270 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.624514 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.624506 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.624650 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.624922 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.625190 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612003 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.625821 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.625852 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.625900 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626251 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626328 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626354 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626381 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626392 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626384 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626729 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626871 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.627190 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.627512 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.628243 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.628922 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.629055 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.629609 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.630124 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.630756 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.630961 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631126 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631187 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631239 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631660 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631720 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631855 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631912 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.632400 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.632966 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.633131 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.633261 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.636239 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.636274 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.636810 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.636944 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:28.136904323 +0000 UTC m=+21.257340807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.637088 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.637879 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.638262 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.638668 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.638780 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.639037 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.639294 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.639042 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.639511 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.639571 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.642042 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.644599 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.646953 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.649014 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.651329 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.656681 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.657335 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.657858 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.657887 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.657902 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.657969 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:28.157946752 +0000 UTC m=+21.278383196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.658240 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.658390 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.658467 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.658482 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.658518 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:28.158507436 +0000 UTC m=+21.278943880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.659196 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.663133 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.663530 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.663755 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.663848 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.665547 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.667864 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.669586 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.669737 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.669754 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.669860 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.670099 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.670237 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.670507 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.671296 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.674850 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.674999 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675083 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675284 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675141 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675649 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675728 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675953 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.679554 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.679745 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.679795 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.680246 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.680340 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.680461 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.680493 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.680569 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.681019 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.681144 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.681891 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.681955 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.682040 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.682287 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.682383 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683078 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683188 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683477 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136").
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683534 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683714 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683948 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683954 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683998 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.685623 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.685632 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.685786 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.684640 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.685928 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.686439 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.686528 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.692926 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.694157 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.697563 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88" exitCode=255 Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.697628 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88"} Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.707306 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712603 4835 scope.go:117] "RemoveContainer" containerID="39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712664 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712741 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712824 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712845 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712860 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712872 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712884 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712896 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712908 4835 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712922 4835 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713065 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713109 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713139 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713157 4835 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713171 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713186 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713200 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713214 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713225 4835 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713237 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713248 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713260 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713271 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713282 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713295 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713306 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713318 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713329 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713341 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713352 4835 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713364 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713375 4835 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713387 4835 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node 
\"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713400 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713443 4835 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713458 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713472 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713488 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713502 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713517 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713530 4835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713543 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713560 4835 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713636 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713681 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713714 4835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713742 4835 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713772 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713801 4835 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713828 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713855 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713883 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713903 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713910 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713987 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714003 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714042 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714055 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714233 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" 
DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714272 4835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714299 4835 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714326 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714352 4835 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714379 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714404 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714475 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714503 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714527 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714551 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714577 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714603 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714630 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 
07:22:27.714656 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714684 4835 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714709 4835 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714737 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714765 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714794 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714822 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714850 4835 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714877 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714904 4835 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714931 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714957 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714984 4835 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath 
\"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715010 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715037 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715062 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715088 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715113 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715173 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715203 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715228 4835 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715258 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715286 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715311 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715337 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715367 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: 
I0201 07:22:27.715394 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715553 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715572 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715584 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715596 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715608 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715619 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715631 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715645 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715656 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715669 4835 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715680 4835 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715693 4835 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 
07:22:27.715704 4835 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715716 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715727 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715739 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715752 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715764 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715775 4835 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715787 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715798 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715810 4835 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715823 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715835 4835 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715847 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715858 4835 reconciler_common.go:293] "Volume detached for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715869 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715881 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715894 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715905 4835 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715917 4835 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715928 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715940 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715957 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715970 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715981 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715992 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716003 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716014 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716026 4835 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716037 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716048 4835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716059 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716071 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716084 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716097 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716109 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716120 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716131 4835 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716160 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716174 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716188 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716201 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716213 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.725240 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.732809 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.732932 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.745847 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.746347 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.757358 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.767709 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.778611 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.788457 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.798588 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.805923 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.810129 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.816780 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.816823 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.816841 4835 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: W0201 07:22:27.819834 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b86ae587f6a2b13900dfb26eb7b6e37e3713336862a41e4a8a2f1668adef115e WatchSource:0}: Error finding container b86ae587f6a2b13900dfb26eb7b6e37e3713336862a41e4a8a2f1668adef115e: Status 404 returned error can't find the container with id b86ae587f6a2b13900dfb26eb7b6e37e3713336862a41e4a8a2f1668adef115e Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.820578 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01
T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.824075 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.832069 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.832257 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: W0201 07:22:27.843555 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-62ac44151f8c113bdf04964eeb0f36fed31f8c3623d257efc91d87091d9f904e WatchSource:0}: Error finding container 62ac44151f8c113bdf04964eeb0f36fed31f8c3623d257efc91d87091d9f904e: Status 404 returned error can't find the container with id 62ac44151f8c113bdf04964eeb0f36fed31f8c3623d257efc91d87091d9f904e Feb 01 07:22:27 crc kubenswrapper[4835]: W0201 07:22:27.853099 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-71e5ee8280fe8e643b0d25617f14af9cf60893b7a85f2b7e4a2b07b02bc2f9b5 WatchSource:0}: Error finding container 71e5ee8280fe8e643b0d25617f14af9cf60893b7a85f2b7e4a2b07b02bc2f9b5: Status 404 returned error can't find the container with id 71e5ee8280fe8e643b0d25617f14af9cf60893b7a85f2b7e4a2b07b02bc2f9b5 Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.934965 4835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.119171 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.119321 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:29.119307377 +0000 UTC m=+22.239743811 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.220345 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.220533 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.220618 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220638 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.220666 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220682 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220704 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220779 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:29.220751758 +0000 UTC m=+22.341188222 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220799 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220903 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:29.220874021 +0000 UTC m=+22.341310495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220903 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220800 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220976 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220999 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.221037 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:29.221006235 +0000 UTC m=+22.341442709 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.221074 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-01 07:22:29.221059946 +0000 UTC m=+22.341496410 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.502788 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:41:07.879752202 +0000 UTC Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.566456 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.566571 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.701206 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"62ac44151f8c113bdf04964eeb0f36fed31f8c3623d257efc91d87091d9f904e"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.703463 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.703493 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.703508 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b86ae587f6a2b13900dfb26eb7b6e37e3713336862a41e4a8a2f1668adef115e"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.705595 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.707346 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.707630 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 
07:22:28.708792 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.708827 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"71e5ee8280fe8e643b0d25617f14af9cf60893b7a85f2b7e4a2b07b02bc2f9b5"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.720696 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.731269 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.744246 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.760449 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01
T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.782950 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.799347 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.816821 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.834740 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.852533 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.871403 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.884662 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.898057 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.914988 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.928789 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.129751 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.130023 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:31.129983529 +0000 UTC m=+24.250419993 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.230597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.230662 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.230696 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.230735 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230831 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230869 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230903 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230912 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:31.230891107 +0000 UTC m=+24.351327561 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230915 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230925 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230965 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:31.230954719 +0000 UTC m=+24.351391163 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230868 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.231031 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.231038 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:31.2310113 +0000 UTC m=+24.351447764 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.231051 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.231137 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:31.231107272 +0000 UTC m=+24.351543736 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.503216 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:25:41.328745264 +0000 UTC Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.566147 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.566328 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.566491 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.566670 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.572479 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.573824 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.576019 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.577240 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.579392 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.580749 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.582022 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.585089 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.586532 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.588828 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.590066 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.592254 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.592934 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.593750 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.594939 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.595621 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.596998 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.597672 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.598533 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.599981 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.600757 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.602110 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.602797 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.604329 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.605018 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.605856 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.607373 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.608050 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.609373 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.610354 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.611662 4835 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.611847 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.614240 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.615398 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.615989 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.618280 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.619300 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.620710 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.621728 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.623529 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.624218 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.625896 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.626900 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.628683 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.629135 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.630046 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.630694 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.631829 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.632295 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.633103 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.633640 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.634518 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.635393 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.636123 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.763531 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.769333 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.773288 4835 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.780189 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.794097 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.811080 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.829524 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.853569 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.871769 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.888860 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.906551 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.925886 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.943374 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.956389 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.968855 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.980923 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.997307 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:30 crc kubenswrapper[4835]: I0201 07:22:30.009264 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:30Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:30 crc kubenswrapper[4835]: I0201 07:22:30.503973 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:22:28.180010113 +0000 UTC Feb 01 07:22:30 crc kubenswrapper[4835]: I0201 07:22:30.565783 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:30 crc kubenswrapper[4835]: E0201 07:22:30.565971 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:30 crc kubenswrapper[4835]: E0201 07:22:30.728531 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.147887 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.148183 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:35.148137922 +0000 UTC m=+28.268574406 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.249478 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.249555 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.249600 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.249636 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249694 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249759 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249783 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249820 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249839 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:35.249801399 +0000 UTC m=+28.370237863 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249839 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249872 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:35.24985886 +0000 UTC m=+28.370295334 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249912 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:35.249888701 +0000 UTC m=+28.370325175 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249943 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249999 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.250022 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.250130 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:35.250103956 +0000 UTC m=+28.370540420 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.504629 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:14:57.17944135 +0000 UTC Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.566324 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.566397 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.566488 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.566577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.718340 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de"} Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.736858 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.750672 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.766773 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.786775 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.803248 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.817877 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.840229 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.856214 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.357830 4835 csr.go:261] certificate signing request csr-c5d78 is approved, waiting to be issued Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.373801 4835 csr.go:257] certificate signing request csr-c5d78 is issued Feb 01 07:22:32 crc 
kubenswrapper[4835]: I0201 07:22:32.488032 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-d8kfl"] Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.488304 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.490999 4835 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.491036 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.491077 4835 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.491090 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.492310 4835 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.492338 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.504941 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:42:15.67744549 +0000 UTC Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.506828 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.532276 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.549456 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.559268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tp8v\" (UniqueName: \"kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.559309 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0c6d0e64-7406-4a2b-8006-8381549b35e6-hosts-file\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.566601 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.566699 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.568390 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.589310 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.613607 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.628748 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.642043 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.654267 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.660070 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0c6d0e64-7406-4a2b-8006-8381549b35e6-hosts-file\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.659898 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0c6d0e64-7406-4a2b-8006-8381549b35e6-hosts-file\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.660733 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tp8v\" (UniqueName: \"kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.966629 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-25s9j"] Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.966905 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-25s9j" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.967438 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qtzjl"] Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.968403 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.969595 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.970577 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wdt78"] Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.971479 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.973442 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.973586 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.982135 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.982388 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.982707 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.982866 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.982991 4835 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.983039 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.983089 4835 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.983102 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.983143 4835 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.983156 4835 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.983194 4835 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.983208 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.986296 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.993311 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.019016 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.047622 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063632 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-multus-certs\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063672 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-daemon-config\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc 
kubenswrapper[4835]: I0201 07:22:33.063690 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-netns\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063717 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/303c450e-4b2d-4908-84e6-df8b444ed640-mcd-auth-proxy-config\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063734 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvhf\" (UniqueName: \"kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063749 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-os-release\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063763 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cni-binary-copy\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063777 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063797 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/303c450e-4b2d-4908-84e6-df8b444ed640-rootfs\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063811 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-etc-kubernetes\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063827 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-binary-copy\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063842 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-cni-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063858 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-socket-dir-parent\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063871 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-hostroot\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063894 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-system-cni-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063911 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063955 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-bin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063972 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-k8s-cni-cncf-io\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063987 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-kubelet\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064001 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-conf-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064018 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksb2t\" (UniqueName: \"kubernetes.io/projected/00cf5926-f943-44c0-a351-db83ab17c2a1-kube-api-access-ksb2t\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064032 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-os-release\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064046 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-multus\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064065 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/303c450e-4b2d-4908-84e6-df8b444ed640-proxy-tls\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064079 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-system-cni-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064093 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-cnibin\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cnibin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064127 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwv4d\" (UniqueName: \"kubernetes.io/projected/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-kube-api-access-qwv4d\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.081552 4835 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.086672 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.088759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.088793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.088803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.088915 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 07:22:33 crc 
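Every status patch above is rejected by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because its serving certificate expired on 2025-08-24T17:21:41Z, long before the node's clock time of 2026-02-01T07:22:33Z. Below is a minimal sketch, not part of the kubelet or OpenShift tooling, that dials the same endpoint (address taken from the log) and prints the serving certificate's validity window, reproducing the x509 expiry check that fails in every "failed calling webhook" entry:

```go
// Minimal sketch: inspect the webhook serving certificate that the kubelet's
// webhook client rejects in the entries above. Address is taken from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip chain verification so the handshake succeeds and we can
		// inspect the expired certificate, instead of failing the way the
		// kubelet's webhook client does.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("no peer certificate presented")
	}
	cert, now := certs[0], time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		// Mirrors the log text: "current time 2026-02-01T07:22:33Z is
		// after 2025-08-24T17:21:41Z".
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```

Run on the node itself; until that certificate is rotated, every pod- and node-status patch in this log will keep failing with the same error.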
kubenswrapper[4835]: I0201 07:22:33.109952 4835 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.110247 4835 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111274 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111283 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111908 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.150555 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.164923 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-os-release\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.164970 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-netns\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165011 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/303c450e-4b2d-4908-84e6-df8b444ed640-mcd-auth-proxy-config\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165036 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvhf\" (UniqueName: \"kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165111 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-os-release\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165230 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-netns\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165068 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cni-binary-copy\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165536 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165675 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/303c450e-4b2d-4908-84e6-df8b444ed640-rootfs\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165718 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/303c450e-4b2d-4908-84e6-df8b444ed640-mcd-auth-proxy-config\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165750 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cni-binary-copy\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165571 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/303c450e-4b2d-4908-84e6-df8b444ed640-rootfs\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165801 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-etc-kubernetes\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165825 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-binary-copy\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165845 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-cni-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165861 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-socket-dir-parent\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165883 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165903 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-hostroot\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165862 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-etc-kubernetes\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-system-cni-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165938 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-system-cni-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166004 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-socket-dir-parent\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166013 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-cni-dir\") pod 
\"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166015 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-bin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166044 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-hostroot\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166047 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-bin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166053 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-conf-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166079 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-conf-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166111 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-k8s-cni-cncf-io\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166131 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-kubelet\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166155 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksb2t\" (UniqueName: \"kubernetes.io/projected/00cf5926-f943-44c0-a351-db83ab17c2a1-kube-api-access-ksb2t\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166179 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-os-release\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166196 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-multus\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166213 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-kubelet\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166229 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/303c450e-4b2d-4908-84e6-df8b444ed640-proxy-tls\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166247 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-system-cni-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166271 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-cnibin\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166283 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-os-release\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166283 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-binary-copy\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166293 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cnibin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166312 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwv4d\" (UniqueName: \"kubernetes.io/projected/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-kube-api-access-qwv4d\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166322 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-system-cni-dir\") pod 
\"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166330 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-multus\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166338 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-multus-certs\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166267 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166343 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166364 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-daemon-config\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166369 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-cnibin\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166247 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-k8s-cni-cncf-io\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166438 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-multus-certs\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166446 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cnibin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc 
kubenswrapper[4835]: I0201 07:22:33.166903 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-daemon-config\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.170389 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.172748 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.176696 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.176725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.176733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.176746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.176756 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.192608 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwv4d\" (UniqueName: \"kubernetes.io/projected/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-kube-api-access-qwv4d\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.198685 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.198742 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.198833 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksb2t\" (UniqueName: \"kubernetes.io/projected/00cf5926-f943-44c0-a351-db83ab17c2a1-kube-api-access-ksb2t\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.204161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.204199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.204208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.204222 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.204231 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.211185 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.215830 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.219068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.219130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.219140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.219153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.219162 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.226237 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error 
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.234976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.235015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.235024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.235039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.235056 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.242071 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-c
ontroller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.251106 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.252738 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.252774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.252784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.252799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.252809 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.259582 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.269172 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.281045 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.283435 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: W0201 07:22:33.295265 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9342eb7_b5ae_47b2_a56d_91ae886e5f0e.slice/crio-75288a7694f79f9f3fd591cd8ac9e443b8f1884c02c2dbc728156263a8802025 WatchSource:0}: Error finding container 75288a7694f79f9f3fd591cd8ac9e443b8f1884c02c2dbc728156263a8802025: Status 404 returned error can't find the container with id 75288a7694f79f9f3fd591cd8ac9e443b8f1884c02c2dbc728156263a8802025 Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.295614 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.300299 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.316987 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.330720 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.344274 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.357996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.358037 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.358049 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.358066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.358078 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.358081 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.366798 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5z5dl"] Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.367951 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.370535 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.371053 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.371369 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.371488 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.371534 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.371557 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 01 07:22:33 crc 
kubenswrapper[4835]: I0201 07:22:33.372790 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.372907 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.374150 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.375150 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-01 07:17:32 +0000 UTC, rotation deadline is 2026-12-21 10:06:42.778514578 +0000 UTC Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.375254 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7754h44m9.403264278s for next certificate rotation Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.384368 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.399351 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.412229 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.424259 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.436249 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.454562 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.462926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.462980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.462998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.463020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.463038 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469044 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469088 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469113 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469139 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78ft\" (UniqueName: \"kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469181 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469204 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469234 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469257 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469279 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469314 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469348 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469375 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469397 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469454 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469489 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469509 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469534 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469554 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469578 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469601 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.478602 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.506375 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:17:49.336092164 +0000 UTC Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.506492 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.510156 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.522517 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.548231 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.560425 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565318 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565770 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.565873 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565902 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.566006 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570017 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570064 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570098 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570128 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570195 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570141 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570125 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570343 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570388 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570428 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570447 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570465 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570497 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570513 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570508 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570538 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570541 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570577 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570613 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570619 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570651 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570700 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570735 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570758 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570787 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570811 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x78ft\" (UniqueName: \"kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570857 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570898 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570857 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570903 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570937 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570923 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570861 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570632 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.571296 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.571379 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.571433 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.574364 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.576829 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.589373 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.591502 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x78ft\" (UniqueName: \"kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.605530 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.617113 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.668051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.668092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 
07:22:33.668101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.668116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.668127 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.678090 4835 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.678138 4835 projected.go:194] Error preparing data for projected volume kube-api-access-6tp8v for pod openshift-dns/node-resolver-d8kfl: failed to sync configmap cache: timed out waiting for the condition Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.678186 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v podName:0c6d0e64-7406-4a2b-8006-8381549b35e6 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:34.17816963 +0000 UTC m=+27.298606054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6tp8v" (UniqueName: "kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v") pod "node-resolver-d8kfl" (UID: "0c6d0e64-7406-4a2b-8006-8381549b35e6") : failed to sync configmap cache: timed out waiting for the condition Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.680380 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.687593 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 01 07:22:33 crc kubenswrapper[4835]: W0201 07:22:33.693998 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd62f19b_07ab_4cc5_84a3_2f097c278de7.slice/crio-f2c33318aecd4d2a27c36deae504704dd76ecedc9768925c3ee036665f4c99e8 WatchSource:0}: Error finding container f2c33318aecd4d2a27c36deae504704dd76ecedc9768925c3ee036665f4c99e8: Status 404 returned error can't find the container with id f2c33318aecd4d2a27c36deae504704dd76ecedc9768925c3ee036665f4c99e8 Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.728610 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"f2c33318aecd4d2a27c36deae504704dd76ecedc9768925c3ee036665f4c99e8"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.730700 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerStarted","Data":"3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.730787 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerStarted","Data":"6e426f333048639d95a80b52286ac07a23c058dc4a44f49da8cb6d15b2530297"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.732296 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerStarted","Data":"213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.732350 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerStarted","Data":"75288a7694f79f9f3fd591cd8ac9e443b8f1884c02c2dbc728156263a8802025"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.753461 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.772582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.772753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.772776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.772800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.772818 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.778674 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.795521 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.813370 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.840014 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.841293 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.850545 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.860864 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/303c450e-4b2d-4908-84e6-df8b444ed640-proxy-tls\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.874888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.874930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.874941 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.874958 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.874971 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.876719 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.887827 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.897974 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.910146 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.920271 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.921861 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.933381 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.947465 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.958238 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.972331 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.977656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.977694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.977703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.977719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.977730 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.995083 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.009359 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc 
kubenswrapper[4835]: I0201 07:22:34.022504 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.038618 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.061090 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.074960 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.079993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.080044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.080059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.080082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.080098 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.093453 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.107268 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.126075 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.147358 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.165701 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: E0201 07:22:34.182652 4835 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182667 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: E0201 07:22:34.182709 4835 projected.go:194] Error preparing data for projected volume kube-api-access-jpvhf for pod openshift-machine-config-operator/machine-config-daemon-wdt78: failed to sync configmap cache: timed out waiting for the condition
Feb 01 07:22:34 crc kubenswrapper[4835]: E0201 07:22:34.182770 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf podName:303c450e-4b2d-4908-84e6-df8b444ed640 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:34.682749462 +0000 UTC m=+27.803185906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jpvhf" (UniqueName: "kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf") pod "machine-config-daemon-wdt78" (UID: "303c450e-4b2d-4908-84e6-df8b444ed640") : failed to sync configmap cache: timed out waiting for the condition
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182890 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.276809 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tp8v\" (UniqueName: \"kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.282289 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tp8v\" (UniqueName: \"kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.285367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.285458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.285476 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.285501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.285520 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.301743 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d8kfl"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.304180 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 01 07:22:34 crc kubenswrapper[4835]: W0201 07:22:34.315308 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c6d0e64_7406_4a2b_8006_8381549b35e6.slice/crio-d81811d48a9723c81dce6c75e322f5295875e52a0045f988a1cf94fb861eb255 WatchSource:0}: Error finding container d81811d48a9723c81dce6c75e322f5295875e52a0045f988a1cf94fb861eb255: Status 404 returned error can't find the container with id d81811d48a9723c81dce6c75e322f5295875e52a0045f988a1cf94fb861eb255
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.387568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.387605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.387638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.388717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.388767 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.491966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.492007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.492046 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.492067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.492078 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.507530 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:23:47.639314025 +0000 UTC
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.566875 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:22:34 crc kubenswrapper[4835]: E0201 07:22:34.567086 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.594475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.594517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.594533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.594554 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.594568 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.700726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.700763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.700773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.700788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.700799 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.736720 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764" exitCode=0
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.736788 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.740224 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d8kfl" event={"ID":"0c6d0e64-7406-4a2b-8006-8381549b35e6","Type":"ContainerStarted","Data":"e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.740340 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d8kfl" event={"ID":"0c6d0e64-7406-4a2b-8006-8381549b35e6","Type":"ContainerStarted","Data":"d81811d48a9723c81dce6c75e322f5295875e52a0045f988a1cf94fb861eb255"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.742092 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2" exitCode=0
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.742137 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2"}
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.756246 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.772257 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.784029 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvhf\" (UniqueName: \"kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.790110 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpvhf\" (UniqueName: \"kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.791435 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.801865 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.802835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.802881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.802898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.802921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.802936 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.811183 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\
\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: W0201 07:22:34.819601 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod303c450e_4b2d_4908_84e6_df8b444ed640.slice/crio-36357963120e59ecd5d22213f9bd0b316d664159db2bb8a508d29de03da3fb3a WatchSource:0}: Error finding container 36357963120e59ecd5d22213f9bd0b316d664159db2bb8a508d29de03da3fb3a: Status 404 returned error can't find the container with id 36357963120e59ecd5d22213f9bd0b316d664159db2bb8a508d29de03da3fb3a Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.824550 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.843539 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.858820 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.880926 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.899074 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.904834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.904870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.904882 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.904899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.904911 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.916894 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.933904 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.952163 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.981200 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.997654 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.006537 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.006577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.006590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.006611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.006625 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.008615 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.020677 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.032310 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.049391 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.067956 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.082960 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.097812 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.109057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.109101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.109115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.109135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.109147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.120050 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e724
46e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.139307 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.158086 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc 
kubenswrapper[4835]: I0201 07:22:35.173629 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.189112 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.189374 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:43.189338941 +0000 UTC m=+36.309775405 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.190769 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
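
The TearDown failure just above is a registration problem, not a storage one: the kubelet resolves unmount handlers against its table of registered CSI plugins, and kubevirt.io.hostpath-provisioner has not announced itself yet (its driver pod cannot start while the node network is down). Node plugins register through sockets in the kubelet plugin registry; a rough check, assuming the default /var/lib/kubelet root (the path is an assumption, not something this log states):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Plugin-registration sockets for a kubelet rooted at /var/lib/kubelet.
        const registry = "/var/lib/kubelet/plugins_registry"

        entries, err := os.ReadDir(registry)
        if err != nil {
            fmt.Println("read registry:", err)
            return
        }
        found := false
        for _, e := range entries {
            fmt.Println("registered:", e.Name())
            // node-driver-registrar conventionally names sockets <driver>-reg.sock.
            if strings.Contains(e.Name(), "kubevirt.io.hostpath-provisioner") {
                found = true
            }
        }
        if !found {
            fmt.Println("kubevirt.io.hostpath-provisioner not registered; unmount keeps backing off")
        }
    }
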
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.211972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.212016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.212027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.212044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.212056 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.233095 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-l7rwg"] Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.233498 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.235251 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.235397 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.235425 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.235534 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.247917 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.263950 4835 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.289926 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/96856bc5-b4b0-4268-8868-65a584408ca7-serviceca\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.289978 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.290017 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.290056 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.290086 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.290119 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2t5v\" (UniqueName: \"kubernetes.io/projected/96856bc5-b4b0-4268-8868-65a584408ca7-kube-api-access-d2t5v\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290142 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290219 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:43.290198178 +0000 UTC m=+36.410634712 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.290153 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96856bc5-b4b0-4268-8868-65a584408ca7-host\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290235 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290307 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:43.290297741 +0000 UTC m=+36.410734285 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290322 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290339 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290352 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290391 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:43.290377053 +0000 UTC m=+36.410813497 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290492 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290505 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290514 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290544 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:43.290535387 +0000 UTC m=+36.410971831 (durationBeforeRetry 8s). 
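
The repeated object "ns"/"name" not registered errors are local short-circuits, not API failures: after a kubelet restart, its configmap and secret managers serve only objects referenced by pods that have completed re-registration, and these pods have not yet. A projected service-account volume needs both kube-root-ca.crt and openshift-service-ca.crt, which is why each failure lists both objects. A toy model of that gate (illustrative only; the names mirror the log, the types are invented):

    package main

    import "fmt"

    // Toy model of the kubelet's pod-scoped object managers: a configmap or
    // secret is visible only once some admitted pod references it.
    type objectCache struct {
        registered map[string]bool   // "namespace/name" referenced by an admitted pod
        store      map[string]string // what the apiserver would return
    }

    func (c *objectCache) registerPodRefs(refs ...string) {
        for _, r := range refs {
            c.registered[r] = true
        }
    }

    func (c *objectCache) get(key string) (string, error) {
        if !c.registered[key] {
            return "", fmt.Errorf("object %q not registered", key) // as in the log
        }
        return c.store[key], nil
    }

    func main() {
        c := &objectCache{
            registered: map[string]bool{},
            store: map[string]string{
                "openshift-network-diagnostics/kube-root-ca.crt": "<PEM bundle>",
            },
        }
        if _, err := c.get("openshift-network-diagnostics/kube-root-ca.crt"); err != nil {
            fmt.Println("before pod registration:", err)
        }
        c.registerPodRefs("openshift-network-diagnostics/kube-root-ca.crt")
        v, _ := c.get("openshift-network-diagnostics/kube-root-ca.crt")
        fmt.Println("after pod registration:", v)
    }
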
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.306543 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.314235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.314304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.314315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.314329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.314339 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
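
The Ready=False condition above is mechanical: the runtime reports NetworkReady=false as long as no CNI configuration exists in /etc/kubernetes/cni/net.d/, and that directory stays empty until ovnkube-node (still PodInitializing elsewhere in this log) writes one. A quick check mirroring what the runtime looks for, assuming the conventional .conf/.conflist/.json file names:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory taken verbatim from the log message.
        const confDir = "/etc/kubernetes/cni/net.d"

        var confs []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, _ := filepath.Glob(filepath.Join(confDir, pat))
            confs = append(confs, matches...)
        }
        if len(confs) == 0 {
            fmt.Println("no CNI configuration file in", confDir, "- node stays NotReady")
            os.Exit(1)
        }
        for _, c := range confs {
            fmt.Println("found:", c)
        }
    }
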
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.339694 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.379501 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.390922 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/96856bc5-b4b0-4268-8868-65a584408ca7-serviceca\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.390998 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d2t5v\" (UniqueName: \"kubernetes.io/projected/96856bc5-b4b0-4268-8868-65a584408ca7-kube-api-access-d2t5v\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.391022 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96856bc5-b4b0-4268-8868-65a584408ca7-host\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.391067 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96856bc5-b4b0-4268-8868-65a584408ca7-host\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.391784 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/96856bc5-b4b0-4268-8868-65a584408ca7-serviceca\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.416250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.416288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.416297 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.416312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.416321 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.420635 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: 
I0201 07:22:35.460400 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2t5v\" (UniqueName: \"kubernetes.io/projected/96856bc5-b4b0-4268-8868-65a584408ca7-kube-api-access-d2t5v\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.486790 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c
7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.508272 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:45:30.214005401 +0000 UTC Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.519186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.519215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.519225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.519240 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.519252 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.529158 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.551683 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.565884 4835 util.go:30] "No sandbox for pod can be found. 
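
One more clock-skew symptom sits in the certificate_manager line above: the kubelet-serving certificate is valid until 2026-02-24, but its rotation deadline, 2025-11-09, is already in the past at the logged time of 2026-02-01, so the manager should attempt rotation immediately. The deadline is drawn at a random point between roughly 70% and 90% of the certificate's lifetime; a sketch of that computation (the jitter fractions and the one-year validity below are assumptions for illustration, not values from this log):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline draws a time between 70% and 90% of the certificate's
    // validity, the jitter scheme client-go's certificate manager is understood
    // to use (the fractions are an assumption, not printed in this log).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
    }

    func main() {
        notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
        notBefore := notAfter.Add(-365 * 24 * time.Hour) // validity assumed for illustration
        now, _ := time.Parse(time.RFC3339, "2026-02-01T07:22:35Z")

        deadline := rotationDeadline(notBefore, notAfter)
        fmt.Println("rotation deadline:", deadline.UTC().Format(time.RFC3339))
        fmt.Println("already past deadline, rotate now:", now.After(deadline))
    }
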
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.565995 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.566046 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.566164 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:35 crc kubenswrapper[4835]: W0201 07:22:35.566653 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96856bc5_b4b0_4268_8868_65a584408ca7.slice/crio-1fb0b190274bb3d2a6c6fe8d462824c4fa7dc16841981476b9bb5cb8d0687ef1 WatchSource:0}: Error finding container 1fb0b190274bb3d2a6c6fe8d462824c4fa7dc16841981476b9bb5cb8d0687ef1: Status 404 returned error can't find the container with id 1fb0b190274bb3d2a6c6fe8d462824c4fa7dc16841981476b9bb5cb8d0687ef1 Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.587281 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.611960 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.623636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.623670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.623679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.623692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.623700 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.639109 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc 
kubenswrapper[4835]: I0201 07:22:35.679102 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.715530 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.725368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.725403 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.725428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.725441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.725450 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.745500 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l7rwg" event={"ID":"96856bc5-b4b0-4268-8868-65a584408ca7","Type":"ContainerStarted","Data":"1fb0b190274bb3d2a6c6fe8d462824c4fa7dc16841981476b9bb5cb8d0687ef1"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.746845 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.746901 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.746915 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"36357963120e59ecd5d22213f9bd0b316d664159db2bb8a508d29de03da3fb3a"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749433 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749534 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749591 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749651 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749726 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749782 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.750726 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585" exitCode=0 Feb 01 
07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.750770 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.759070 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.795306 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5
ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.831855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.831895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.831906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.831919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.831927 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.838834 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.879793 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.915637 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.934571 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.934609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.934620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.934640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.934653 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.960728 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.002712 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.037890 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.038193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.038211 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.038237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.038254 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.038656 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.080316 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.129780 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.140746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.140852 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.140880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.140914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.140952 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.155726 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.205329 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.239540 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.243658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.243681 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.243688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.243701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.243709 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.293628 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.323730 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.345783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.345807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.345815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.345828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.345836 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.447984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.448031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.448042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.448058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.448070 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.509160 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:41:33.310402716 +0000 UTC Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.550687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.550714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.550724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.550742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.550755 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.566688 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:36 crc kubenswrapper[4835]: E0201 07:22:36.566862 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.653758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.653823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.653841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.653868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.653892 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.764123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.764177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.764206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.764232 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.764255 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.769480 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f" exitCode=0 Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.769687 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.774736 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l7rwg" event={"ID":"96856bc5-b4b0-4268-8868-65a584408ca7","Type":"ContainerStarted","Data":"1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.794795 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.821362 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.848069 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.867584 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.871169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.871199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.871212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.871231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.871242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.887549 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.909845 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.934210 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.951747 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.971344 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.974496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.974544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.974565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.974589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.974606 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.986142 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.001482 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.021004 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.037967 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.064114 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.078130 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.078187 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.078215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.078250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.078272 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.083944 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.102396 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.126167 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.142467 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.160546 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.179293 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.181628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.181674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.181691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.181717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.181733 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.198540 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.214493 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.246030 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.285688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.285966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.286099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.286283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.286401 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.286820 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.324794 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.329596 4835 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.389301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.389364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.389381 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.389406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.389451 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.495035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.495097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.495115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.495143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.495164 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.509598 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:59:58.901672431 +0000 UTC Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.566206 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.566447 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:37 crc kubenswrapper[4835]: E0201 07:22:37.566504 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:37 crc kubenswrapper[4835]: E0201 07:22:37.566834 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.598317 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.598376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.598391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.598430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.598450 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.700909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.700969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.700987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.701014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.701035 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.785471 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.789822 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6" exitCode=0 Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.789931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.803478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.803521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.803532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.803551 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.803567 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.906178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.906250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.906275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.906308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.906346 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.010619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.010659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.010667 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.010682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.010691 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.113019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.113064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.113075 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.113090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.113101 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.216490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.216551 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.216568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.216601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.216619 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.319430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.319478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.319498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.319520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.319535 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.342891 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.361563 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.373004 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.384304 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.405785 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.425671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.426054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.426067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.426087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.426101 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.431734 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.454305 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.474814 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.494184 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.507985 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.510033 4835 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:22:23.488858028 +0000 UTC
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.527852 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.527883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.527895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.527913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.527926 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.530593 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.544710 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.562174 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.565751 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:22:38 crc kubenswrapper[4835]: E0201 07:22:38.565868 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.575933 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.593079 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.618955 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.630878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.630927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.630939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.630956 4835
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.630969 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.633320 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.657678 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.677228 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.695210 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.712899 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.727689 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.736093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.736160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.736183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.736280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.736311 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.740147 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.759318 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.777142 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.797354 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.800185 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341" exitCode=0 Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.800252 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" 
event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.815537 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.839515 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.840046 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.840077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.840086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.840110 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.840121 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.860850 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.897033 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.913335 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.932144 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.943572 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.943613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.943622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.943636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.943645 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.950585 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.970275 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.993994 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.025594 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.046535 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.047543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.047611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.047634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.047692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.047718 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.065257 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.082377 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.115936 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.134167 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.151094 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.151176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.151200 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.151234 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.151256 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.152912 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.169970 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.231936 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.248080 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.253501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.253535 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.253546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.253563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.253576 4835 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.356699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.356755 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.356774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.356798 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.356816 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.459964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.460026 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.460043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.460073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.460091 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.510788 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:35:22.297920587 +0000 UTC Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.562694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.562743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.562756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.562774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.562787 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.566298 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:39 crc kubenswrapper[4835]: E0201 07:22:39.566500 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.566562 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:39 crc kubenswrapper[4835]: E0201 07:22:39.566730 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.665733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.665810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.665826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.665855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.665874 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.770055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.770436 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.770454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.770478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.770496 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.808498 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85" exitCode=0 Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.808573 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.836283 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.859316 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.874593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.874669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.874692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.874723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.874744 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.883067 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21
591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.905763 4835 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.923887 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.938814 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.951854 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.973273 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.983619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.983666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.983693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.983723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.983742 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.991734 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.007920 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.026469 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.054717 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.068448 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.087301 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.088753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.088772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.088780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.088793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.088802 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.191564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.191618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.191641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.191672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.191698 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.293782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.293832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.293848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.293870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.293884 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.396395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.396503 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.396526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.396556 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.396578 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.499957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.500031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.500055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.500083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.500102 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.511771 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:05:54.897002703 +0000 UTC
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.566670 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:22:40 crc kubenswrapper[4835]: E0201 07:22:40.566897 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.603259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.603326 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.603344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.603598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.603646 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.706855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.706921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.706939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.706963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.706982 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.810255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.810311 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.810327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.810349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.810366 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.818753 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34"}
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.819014 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl"
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.826140 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerStarted","Data":"9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d"}
Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.854780 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7
ecf63e2d238327abea5dcd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.887124 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.896243 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.914483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.914539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.914555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.914582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.914601 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.916899 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.938847 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.960933 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.985535 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.008083 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.017058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.017152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.017171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.017196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.017215 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.028046 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.048705 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.067085 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.090293 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.112952 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.119907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.119948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.119964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.119986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.120003 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.134977 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.152540 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.175516 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.194960 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.213049 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.223544 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.223603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.223620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.223647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.223672 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.246052 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7
ecf63e2d238327abea5dcd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.264054 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.282983 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.302387 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.323338 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.326533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.326586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.326603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.326633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.326650 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.354961 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.380682 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.403308 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.422789 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.428869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.428930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.428947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.428973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.428990 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.439855 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.460171 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.512862 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:20:08.975271133 +0000 UTC Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.532647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.532708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.532726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.532753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.532771 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.566311 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:41 crc kubenswrapper[4835]: E0201 07:22:41.566518 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.566631 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:41 crc kubenswrapper[4835]: E0201 07:22:41.566824 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.635652 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.635723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.635740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.635765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.635782 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.739486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.739554 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.739594 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.739620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.739638 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.830551 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.831219 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.842605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.842663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.842688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.842715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.842738 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.866903 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.882832 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.900495 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.915872 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.934800 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.945377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.945497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.945517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.945541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.945560 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.958384 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.979270 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.998712 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.035371 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.048661 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.048716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.048730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:42 crc 
kubenswrapper[4835]: I0201 07:22:42.048751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.048766 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.056108 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.081831 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.105529 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.121245 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.154980 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.181645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.181692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.181704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.181727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.181739 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.195873 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.284230 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.284314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
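Annotation: the two "Failed to update status for pod" entries above share one root cause. The pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-02-01, so every status patch is rejected at the TLS layer. A minimal Go sketch (editorial, not kubelet code; the address is taken from the log entries, and it assumes the endpoint is reachable from the node) for confirming the certificate window:

// certcheck.go: dial the webhook endpoint and print its serving
// certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skip chain verification deliberately: a verifying handshake would
	// abort on the expired certificate before we could inspect it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}

Against this log's endpoint the sketch would print notAfter 2025-08-24T17:21:41Z and expired true, matching the x509 error recorded above.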
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.284339 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.284371 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.284396 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.387835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.387892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.387909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.387933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.387953 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.492941 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.492987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.492996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.493010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.493021 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.513265 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:17:44.327435246 +0000 UTC
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.566289 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:22:42 crc kubenswrapper[4835]: E0201 07:22:42.566623 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.596761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.596820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.596837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.596860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.596879 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.700142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.700193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.700206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.700229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.700247 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.803444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.803503 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.803523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.803544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.803559 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.834116 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.906380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.906445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.906458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.906478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.906496 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.009675 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.009740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.009764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.009796 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.009818 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.112888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.112960 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.112995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.113024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.113044 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.216679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.216727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.216738 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.216758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.216771 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
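Annotation: the NodeNotReady condition repeats one message throughout this window: no CNI configuration file in /etc/kubernetes/cni/net.d/. The kubelet keeps the node NotReady until its network provider writes a config there. A small Go sketch (editorial; it only mirrors the readiness complaint above, with the directory path taken from the log and the scanned extensions being the usual CNI config suffixes) to check the directory by hand:

// cnicheck.go: report whether the CNI config directory named in the
// log contains any network configuration files.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d" // path from the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		log.Fatalf("read %s: %v", confDir, err)
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // common CNI config suffixes
			fmt.Println("config:", filepath.Join(confDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files; node stays NotReady until the network provider writes one")
	}
}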
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.275819 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.275993 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:59.275970727 +0000 UTC m=+52.396407171 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.323807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.323901 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.323923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.323949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.324149 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.376971 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.377022 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.377048 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.377082 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377175 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377200 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377220 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377231 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377252 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:59.377231464 +0000 UTC m=+52.497667908 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377279 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:59.377267034 +0000 UTC m=+52.497703588 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377291 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377330 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:59.377312386 +0000 UTC m=+52.497748930 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377383 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377393 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377401 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377448 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:59.377440739 +0000 UTC m=+52.497877263 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
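Annotation: the UnmountVolume.TearDown failure above reports that driver kubevirt.io.hostpath-provisioner is "not found in the list of registered CSI drivers", so the kubelet backs off for 16s and retries. CSI drivers announce themselves to the kubelet through registration sockets; a diagnostic Go sketch (editorial; /var/lib/kubelet/plugins_registry is the kubelet's default registration directory and is an assumption about this node, as is the sketch itself) to see whether the driver has re-registered:

// csicheck.go: list kubelet plugin registration sockets so you can see
// whether kubevirt.io.hostpath-provisioner (named in the error above)
// is back. Adjust the path if this node overrides the kubelet root dir.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	const regDir = "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(regDir)
	if err != nil {
		log.Fatalf("read %s: %v", regDir, err)
	}
	if len(entries) == 0 {
		fmt.Println("no plugins registered; the CSI driver pod has not come back yet")
	}
	for _, e := range entries {
		fmt.Println("registered:", e.Name()) // socket names embed the driver name
	}
}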
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.412097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.412178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.412193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.412214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.412228 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.425699 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.429549 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.429582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.429593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.429608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.429621 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.442655 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.450622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.450814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.450835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.450886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.450904 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.469477 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.473686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.473740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.473754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.473772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.473782 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.486668 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.492618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.492786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.492833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.492869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.492894 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.506666 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.506814 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.508356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.508382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.508393 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.508442 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.508459 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.513842 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:22:58.689958792 +0000 UTC Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.566579 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.566606 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.566789 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.566900 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.611795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.611871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.611894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.611919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.611936 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.714332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.714374 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.714384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.714402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.714429 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.817316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.817376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.817396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.817445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.817463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.840082 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/0.log" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.844157 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34" exitCode=1 Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.844205 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.845295 4835 scope.go:117] "RemoveContainer" containerID="eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.870917 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.891597 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.908969 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.920488 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.920516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.920525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.920539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.920548 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.924028 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.950011 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7
ecf63e2d238327abea5dcd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:42Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:42.866826 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:42.866851 6156 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:22:42.866869 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 07:22:42.866878 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 07:22:42.866911 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:42.866931 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:22:42.866943 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 07:22:42.867442 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:42.867480 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:22:42.867489 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:22:42.867525 6156 factory.go:656] Stopping watch factory\\\\nI0201 07:22:42.867542 6156 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:22:42.867569 6156 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:42.867595 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 
07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.962748 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.979847 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.997240 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.016352 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.023292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.023344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.023360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.023383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.023399 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.033492 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:
22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.046832 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.062596 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.078441 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.087780 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.125326 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.125356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.125364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.125376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.125384 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.228021 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.228068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.228086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.228105 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.228117 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.331301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.331374 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.331398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.331472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.331498 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.434735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.434784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.434801 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.434823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.434840 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.514250 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:40:49.405608474 +0000 UTC
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.537080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.537121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.537134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.537150 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.537163 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.566756 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:22:44 crc kubenswrapper[4835]: E0201 07:22:44.566888 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.640189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.640231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.640242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.640259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.640271 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.742858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.742894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.742902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.742916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.742926 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.845147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.845188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.845198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.845213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.845224 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.848085 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/0.log"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.850258 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732"}
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.850369 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.867840 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.881771 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.892516 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.907148 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.920983 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.932935 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.943935 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.947561 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.947605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.947617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.947634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.947646 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.962557 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:42Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:42.866826 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:42.866851 6156 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:22:42.866869 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 07:22:42.866878 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 07:22:42.866911 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:42.866931 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:22:42.866943 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 07:22:42.867442 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:42.867480 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:22:42.867489 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:22:42.867525 6156 factory.go:656] Stopping watch factory\\\\nI0201 07:22:42.867542 6156 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:22:42.867569 6156 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:42.867595 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 
07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.972268 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.982698 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.992560 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.007472 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.019716 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.034244 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.050777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.050828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.050845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.050867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.050884 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.153519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.153550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.153562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.153577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.153588 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.255994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.256066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.256090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.256121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.256143 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.359467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.359536 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.359559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.359590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.359611 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.462434 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.462472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.462483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.462498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.462509 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.515007 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:04:28.554137184 +0000 UTC Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.565767 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.565753 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:45 crc kubenswrapper[4835]: E0201 07:22:45.565920 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.565885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: E0201 07:22:45.566039 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.566065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.566108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.566137 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.566160 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.668899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.668953 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.668971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.668994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.669016 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.772889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.772967 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.772991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.773023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.773046 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.856371 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/1.log" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.857255 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/0.log" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.862859 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732" exitCode=1 Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.862916 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.862983 4835 scope.go:117] "RemoveContainer" containerID="eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.864170 4835 scope.go:117] "RemoveContainer" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732" Feb 01 07:22:45 crc kubenswrapper[4835]: E0201 07:22:45.864578 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.876808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.876855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.876874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.876926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.876944 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.887206 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.907858 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.928884 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.953061 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.972278 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.979922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.979969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.979985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.980008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.980028 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.991052 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.009629 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.025146 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.042572 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.063472 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.076161 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf"] Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.076923 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.081738 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.081886 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.082752 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.082808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.082833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.082864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.082889 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.085090 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.106493 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.129799 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:42Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:42.866826 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:42.866851 6156 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:22:42.866869 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 07:22:42.866878 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 07:22:42.866911 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:42.866931 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:22:42.866943 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 07:22:42.867442 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:42.867480 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:22:42.867489 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:22:42.867525 6156 factory.go:656] Stopping watch factory\\\\nI0201 07:22:42.867542 6156 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:22:42.867569 6156 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:42.867595 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy 
event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7
6fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.141944 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.155176 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.155953 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.170272 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.186633 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.186701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.186722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.186748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.186772 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.190254 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.207686 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.208509 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.208614 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.208761 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.208883 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6kkn\" (UniqueName: \"kubernetes.io/projected/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-kube-api-access-k6kkn\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.227603 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.257667 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:42Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:42.866826 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:42.866851 6156 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:22:42.866869 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 07:22:42.866878 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 07:22:42.866911 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:42.866931 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:22:42.866943 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 07:22:42.867442 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:42.867480 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:22:42.867489 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:22:42.867525 6156 factory.go:656] Stopping watch factory\\\\nI0201 07:22:42.867542 6156 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:22:42.867569 6156 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:42.867595 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy 
event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7
6fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.275311 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.290155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.290210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.290227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.290252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.290273 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.298513 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.310294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.310364 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.310455 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6kkn\" (UniqueName: \"kubernetes.io/projected/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-kube-api-access-k6kkn\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.310507 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.311713 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.311871 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.319467 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.319905 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.320329 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.339465 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.344396 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6kkn\" (UniqueName: \"kubernetes.io/projected/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-kube-api-access-k6kkn\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.360227 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.377345 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.393731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.393835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.393858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.393887 4835 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.393913 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.400233 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.400301 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f15
1840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.423644 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: W0201 07:22:46.425209 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97c5a8c8_51ec_4c9b_9334_1c059fce5ee2.slice/crio-2da9a37effcdf926ab08b87e3ef5413f100192fd04ba939bcbc7e980d128e702 WatchSource:0}: Error finding container 2da9a37effcdf926ab08b87e3ef5413f100192fd04ba939bcbc7e980d128e702: Status 404 returned error can't find the container with id 2da9a37effcdf926ab08b87e3ef5413f100192fd04ba939bcbc7e980d128e702 Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.444320 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.464862 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.485931 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.497531 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.497606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.497629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.497661 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.497686 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.515126 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:34:52.9349636 +0000 UTC Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.516390 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:42Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:42.866826 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:42.866851 6156 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:22:42.866869 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 07:22:42.866878 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 07:22:42.866911 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:42.866931 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:22:42.866943 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 07:22:42.867442 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:42.867480 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:22:42.867489 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:22:42.867525 6156 factory.go:656] Stopping watch factory\\\\nI0201 07:22:42.867542 6156 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:22:42.867569 6156 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:42.867595 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 
07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.532345 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.552375 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5
ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"st
artTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.566801 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:46 crc kubenswrapper[4835]: E0201 07:22:46.566968 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.570248 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.587105 4835 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.600457 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.600515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.600533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.600557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.600577 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.606510 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.620378 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.640312 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.662172 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.679375 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.697583 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.702772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.702809 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.702825 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.702848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.702865 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.716194 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.764548 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 
07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.805207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.805263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.805283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.805304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.805319 4835 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.869628 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/1.log" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.876772 4835 scope.go:117] "RemoveContainer" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732" Feb 01 07:22:46 crc kubenswrapper[4835]: E0201 07:22:46.877046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.877642 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" event={"ID":"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2","Type":"ContainerStarted","Data":"9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.877701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" event={"ID":"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2","Type":"ContainerStarted","Data":"bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.877714 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" event={"ID":"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2","Type":"ContainerStarted","Data":"2da9a37effcdf926ab08b87e3ef5413f100192fd04ba939bcbc7e980d128e702"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.898043 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.909564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.909630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.909654 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.909684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.909705 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.915863 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.940191 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.974047 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82f
f1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.989879 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.005606 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.012473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.012546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.012563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.012587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.012605 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.021602 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.033869 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.053201 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.072616 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.093764 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.108756 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.116235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.116277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.116292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.116320 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.116337 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.125508 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.142551 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.161479 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.173855 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.190586 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.208331 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.219853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.219919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.219936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.219961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.219983 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.243559 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.264327 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.285901 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.299769 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.315003 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.322154 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.322218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.322243 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.322270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.322289 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.329706 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.341082 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.358131 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.372447 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.385908 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.398481 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.412034 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.424980 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.425025 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.425037 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.425053 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.425065 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.515273 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:49:56.044083909 +0000 UTC Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.527447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.527504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.527520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.527548 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.527565 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.566163 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.566335 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.566407 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.566684 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.590404 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.599363 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2msm5"] Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.600112 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.600207 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.610914 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.631016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.631066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.631083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.631106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.631123 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.632985 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.666258 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.686952 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.706046 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.727236 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.727293 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthdk\" (UniqueName: \"kubernetes.io/projected/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-kube-api-access-tthdk\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.730952 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.734692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.734757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.734781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.734814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.734838 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.745539 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.763044 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.787751 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852
910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.810447 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.827848 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.828100 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.828230 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.828480 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthdk\" (UniqueName: \"kubernetes.io/projected/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-kube-api-access-tthdk\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.828531 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:22:48.328499397 +0000 UTC m=+41.448935861 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.840149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.840201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.840217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.840244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.840262 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.853167 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.860854 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthdk\" (UniqueName: \"kubernetes.io/projected/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-kube-api-access-tthdk\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.867886 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.882667 4835 scope.go:117] "RemoveContainer" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.882919 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.916148 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 
07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.930595 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.943350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.943400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.943445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.943470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.943487 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.948167 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.966214 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.995737 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.015576 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.040872 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.046729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.046784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc 
kubenswrapper[4835]: I0201 07:22:48.046802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.046828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.046846 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.061889 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.081484 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.099219 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.117773 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.136319 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.151012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.151099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.151119 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.151151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.151176 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.160602 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.180893 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.206096 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.226855 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.245225 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.254398 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.254498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.254519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.254545 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.254565 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.338578 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:48 crc kubenswrapper[4835]: E0201 07:22:48.338760 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:48 crc kubenswrapper[4835]: E0201 07:22:48.338842 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:22:49.338820203 +0000 UTC m=+42.459256667 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.356741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.356792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.356807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.356832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.356850 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.464812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.464878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.464898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.464926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.464947 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.515583 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:40:21.51002953 +0000 UTC Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.566186 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:48 crc kubenswrapper[4835]: E0201 07:22:48.566353 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.568391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.568465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.568482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.568502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.568520 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.671161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.671220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.671239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.671264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.671284 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.774461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.774511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.774528 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.774550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.774568 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.877264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.877310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.877323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.877338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.877349 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.981155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.981482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.981498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.981515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.981526 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.084398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.084496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.084516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.084542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.084560 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.187248 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.187334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.187362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.187388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.187442 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.292717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.292770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.292783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.292801 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.292814 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.349375 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:49 crc kubenswrapper[4835]: E0201 07:22:49.349592 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:49 crc kubenswrapper[4835]: E0201 07:22:49.349699 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:22:51.3496762 +0000 UTC m=+44.470112644 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.395261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.395323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.395340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.395365 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.395385 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.497934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.497996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.498014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.498068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.498098 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.516168 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:04:05.044000037 +0000 UTC Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.565838 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:49 crc kubenswrapper[4835]: E0201 07:22:49.565969 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.566476 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:49 crc kubenswrapper[4835]: E0201 07:22:49.566558 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.566614 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:49 crc kubenswrapper[4835]: E0201 07:22:49.566676 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.600682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.600736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.600758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.600783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.600800 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.703672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.703723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.703740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.703762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.703779 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.806628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.806686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.806706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.806731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.806749 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.909026 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.909091 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.909109 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.909133 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.909152 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.012404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.012496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.012515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.012541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.012561 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.114768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.114831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.114848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.114873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.114890 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.216966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.217018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.217035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.217055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.217071 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.319674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.319728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.319745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.319772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.319788 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.422846 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.422905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.422922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.422948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.422965 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.517076 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:44:04.365550946 +0000 UTC Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.525495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.525707 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.525841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.525974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.526141 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.566065 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:50 crc kubenswrapper[4835]: E0201 07:22:50.566565 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.629081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.629142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.629158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.629183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.629200 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.731763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.731829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.731847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.731876 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.731894 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.835084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.835161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.835184 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.835213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.835235 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.938688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.938781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.938804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.938835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.938857 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.042621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.042698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.042720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.042753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.042787 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.145446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.145500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.145517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.145540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.145556 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.248913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.248980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.249000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.249030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.249048 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.351818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.351878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.351894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.351915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.351934 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.372651 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:51 crc kubenswrapper[4835]: E0201 07:22:51.372865 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:51 crc kubenswrapper[4835]: E0201 07:22:51.372939 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:22:55.37291799 +0000 UTC m=+48.493354464 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.454506 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.454562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.454579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.454600 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.454618 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.518880 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:33:26.083762835 +0000 UTC
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.557770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.557836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.557858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.557887 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.557912 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.566173 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.566305 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.566445 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5"
Feb 01 07:22:51 crc kubenswrapper[4835]: E0201 07:22:51.566610 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 07:22:51 crc kubenswrapper[4835]: E0201 07:22:51.566835 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:22:51 crc kubenswrapper[4835]: E0201 07:22:51.567387 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.661342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.661405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.661447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.661472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.661490 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.764392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.764474 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.764494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.764519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.764537 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.867470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.867515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.867532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.867560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.867577 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.970209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.970283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.970302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.970327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.970347 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.073607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.073694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.073710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.073734 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.073754 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.177509 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.177576 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.177598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.177626 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.177648 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.281546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.281615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.281640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.281668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.281689 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.384311 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.384378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.384395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.384470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.384532 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.487905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.488212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.488349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.488705 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.488878 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.519511 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:23:41.608144647 +0000 UTC
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.565815 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:22:52 crc kubenswrapper[4835]: E0201 07:22:52.566008 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.591773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.591824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.591841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.591864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.591881 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.694151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.694197 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.694213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.694237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.694253 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.797194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.797288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.797348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.797371 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.797455 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.900914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.901081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.901113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.901192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.901221 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.004215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.004281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.004298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.004322 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.004340 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.107117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.107162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.107179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.107220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.107241 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.210096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.210158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.210175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.210196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.210213 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.313289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.313361 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.313387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.313455 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.313483 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.417175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.417258 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.417284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.417315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.417337 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.519620 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:45:35.682345378 +0000 UTC
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.520500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.520572 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.520590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.520619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.520642 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.565764 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.565810 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.565831 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.565980 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe"
Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.566180 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.566326 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.624633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.624720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.624744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.624778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.624798 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.727761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.727817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.727834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.727858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.727875 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.734637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.734688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.734704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.734724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.734739 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.755068 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:53Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.760265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.760331 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.760349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.760373 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.760391 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.782663 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:53Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.788363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.788488 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.788517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.788547 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.788574 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.810680 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:53Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.815765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.815808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.815820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.815838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.815850 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.833700 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:53Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.839483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.839519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.839530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.839546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.839558 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.873299 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:53Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.873643 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.877290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.877330 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.877343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.877362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.877377 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.980754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.980822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.980840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.980870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.980889 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.084113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.084185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.084203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.084229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.084251 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.187054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.187106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.187124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.187147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.187163 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.290325 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.290395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.290453 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.290486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.290505 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.393166 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.393239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.393262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.393297 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.393321 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.496084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.496153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.496178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.496209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.496231 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.519991 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:28:14.437561785 +0000 UTC Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.566787 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:54 crc kubenswrapper[4835]: E0201 07:22:54.567003 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.599260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.599342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.599364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.599390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.599441 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.702130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.702195 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.702217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.702242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.702259 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.805839 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.805908 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.805928 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.805953 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.805978 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.909212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.909262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.909278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.909300 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.909317 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.012694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.012761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.012783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.012811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.012832 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.115188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.115260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.115280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.115305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.115322 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.218635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.218702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.218722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.218751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.218769 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.321168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.321235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.321251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.321276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.321293 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.424761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.424819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.424836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.424860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.424878 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.425057 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:55 crc kubenswrapper[4835]: E0201 07:22:55.425247 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:55 crc kubenswrapper[4835]: E0201 07:22:55.425339 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:23:03.425314652 +0000 UTC m=+56.545751126 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.520405 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:58:04.603734667 +0000 UTC Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.527822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.527885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.527902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.527927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.527945 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.566707 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.566786 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.566959 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:55 crc kubenswrapper[4835]: E0201 07:22:55.567086 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:55 crc kubenswrapper[4835]: E0201 07:22:55.567205 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:55 crc kubenswrapper[4835]: E0201 07:22:55.567311 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.630582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.630624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.630635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.630651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.630662 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.734456 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.734521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.734541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.734564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.734581 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.837098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.837176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.837198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.837227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.837249 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.940526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.940596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.940615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.940641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.940660 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.043292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.043351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.043368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.043392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.043444 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.146349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.146404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.146464 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.146489 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.146506 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.249338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.249384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.249400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.249528 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.249547 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.352533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.352607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.352639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.352668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.352690 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.455967 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.456023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.456042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.456064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.456080 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.521266 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:13:01.252542572 +0000 UTC Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.558852 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.558905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.558922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.558944 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.558960 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.566397 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:56 crc kubenswrapper[4835]: E0201 07:22:56.566612 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.662019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.662073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.662089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.662113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.662132 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.764532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.764583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.764599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.764622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.764637 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.866998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.867088 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.867111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.867142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.867169 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.970466 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.970532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.970550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.970574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.970592 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.073278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.073353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.073379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.073444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.073475 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.177140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.177210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.177230 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.177254 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.177271 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.280231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.280314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.280340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.280371 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.280395 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.383867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.383912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.383923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.383940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.383957 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.487510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.487588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.487611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.487636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.487653 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.522033 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:48:51.095426439 +0000 UTC Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.566067 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.566099 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:57 crc kubenswrapper[4835]: E0201 07:22:57.566335 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:57 crc kubenswrapper[4835]: E0201 07:22:57.566496 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.566584 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:57 crc kubenswrapper[4835]: E0201 07:22:57.566731 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.590530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.590611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.590635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.590665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.590689 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.600339 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 
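
The secure_serving.go:69 warnings embedded in the check-endpoints termination message flag CBC and static-RSA cipher suites. The serving code keeps its own insecure list, which overlaps but need not match Go's; the standard library's view of the same idea:

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    // Go's own list of cipher suites with known weaknesses; printing it
    // yields warnings shaped like the secure_serving.go ones above.
    func main() {
        for _, cs := range tls.InsecureCipherSuites() {
            fmt.Printf("Use of insecure cipher '%s' detected.\n", cs.Name)
        }
    }
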
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.620489 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
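
The recurring patch failure has a single root cause, spelled out at the end of the entry: the network-node-identity webhook's serving certificate expired 2025-08-24, more than five months before the node's 2026-02-01 clock, so every TLS handshake to https://127.0.0.1:9743 fails and every pod status patch through the webhook is rejected. A minimal reproduction of the failing check (the certificate path is a placeholder, not taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Parse a serving certificate and compare the clock against its
    // validity window, as the failing TLS verification does.
    func main() {
        pemBytes, err := os.ReadFile("/path/to/webhook-serving-cert.pem") // placeholder
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now().UTC()
        switch {
        case now.After(cert.NotAfter):
            fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Println("x509: certificate not yet valid")
        default:
            fmt.Println("certificate is within its validity window")
        }
    }
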
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.638397 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.656810 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.690573 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.695584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.695644 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.695661 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.695686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.695704 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.708039 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.726557 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 
07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.746622 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.765181 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.784015 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.798631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.798683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.798701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.798725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.798743 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.810107 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.831155 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add9
91e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.851586 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.873212 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.888844 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.901735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.901805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.901830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.901860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.901885 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.907624 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.005509 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc 
kubenswrapper[4835]: I0201 07:22:58.005578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.005602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.005633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.005654 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.108562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.108622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.108641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.108669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.108687 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.212312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.212376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.212389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.212443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.212460 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
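The repeated status_manager.go:875 failures in this log share one root cause: the network-node-identity webhook at https://127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is long past the node's clock (2026-02-01). A minimal Go sketch of the validity check that the x509 handshake performs; the path webhook.pem is hypothetical, standing in for wherever the webhook's serving certificate has been copied for inspection:

// certcheck.go: report whether a PEM-encoded certificate is currently valid.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical copy of the webhook serving certificate; not a path from this log.
	data, err := os.ReadFile("webhook.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s\n", cert.NotBefore, cert.NotAfter)
	switch {
	case now.After(cert.NotAfter):
		// The failure mode in this log: current time is after NotAfter.
		fmt.Println("certificate has expired")
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is currently valid")
	}
}

Run against the expired webhook certificate, this prints the same "expired" verdict that the kubelet's TLS handshake keeps reporting above.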
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.315925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.315994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.316014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.316043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.316067 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.419541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.419610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.419628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.419656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.419680 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
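Every KubeletNotReady entry carries the same message: no CNI configuration file in /etc/kubernetes/cni/net.d/. The runtime reports NetworkReady=false until the network plugin (here ovn-kubernetes, via multus) writes its config into that directory. A sketch of that directory probe in Go; the extension set (.conf, .conflist, .json) follows common CNI conventions and is illustrative rather than cri-o's exact matching logic:

// cnicheck.go: look for CNI network configuration files in the directory
// named by the NetworkReady message.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory cited in the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read", confDir, "-", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", e.Name())
			found++
		}
	}
	if found == 0 {
		// The state this log is reporting: the network plugin has not
		// written its config yet, so the node stays NotReady.
		fmt.Println("no CNI configuration file found")
	}
}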
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.522276 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:55:58.65557722 +0000 UTC
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.522933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.522970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.522982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.522998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.523011 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.565757 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:22:58 crc kubenswrapper[4835]: E0201 07:22:58.565944 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.567318 4835 scope.go:117] "RemoveContainer" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.626299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.626704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.626725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.626754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.626778 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
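The certificate_manager.go:356 line above shows why the kubelet-serving certificate is overdue for rotation: the computed rotation deadline (2025-12-23) already lies in the past relative to the node clock. client-go's certificate manager derives that deadline by jittering within the validity window, commonly around 70-90% of the way from NotBefore to NotAfter; the exact fraction and the NotBefore value below are assumptions for illustration, not values taken from this log:

// rotation.go: compute a jittered rotation deadline for a serving certificate.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a point in roughly the [70%, 90%) span of the
// certificate's lifetime (assumed jitter; treat as illustrative).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	// Hypothetical issue time; expiry matches the log line above.
	notBefore := time.Date(2025, time.November, 26, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}

Once the deadline passes, the manager attempts to rotate; here the webhook failures block the surrounding status traffic regardless.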
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.729925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.729980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.729997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.730020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.730038 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.849564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.849612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.849629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.849665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.849684 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
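Each "Node became not ready" entry embeds the Ready condition as inline JSON, so the reason and message can be pulled out mechanically when scanning a log like this. A small Go sketch that decodes the exact payload logged above:

// readycond.go: decode the condition object that setters.go:603 logs when
// the node flips NotReady, and extract the reason/message fields.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from the setters.go entries in this log.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	if c.Type == "Ready" && c.Status == "False" {
		fmt.Printf("node NotReady: reason=%s\nmessage=%s\n", c.Reason, c.Message)
	}
}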
Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.927978 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/1.log" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.931364 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.931928 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.954054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.954115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.954145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.954163 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.954174 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.956694 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:58Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.972698 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:58Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.987267 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:58Z is after 2025-08-24T17:21:41Z" Feb 01 
07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.003779 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.022223 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.049472 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.057269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.057307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.057321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.057340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.057356 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.073744 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.094097 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.127677 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.137498 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.148892 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.160221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.160277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.160295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.160318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc 
kubenswrapper[4835]: I0201 07:22:59.160337 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.163439 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.182756 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.205350 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.223458 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.237252 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.263381 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.263451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.263467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.263484 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.263497 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.365702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.365764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.365785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.365811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.365830 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.374347 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.374648 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:23:31.37462319 +0000 UTC m=+84.495059664 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.468926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.468993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.469015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.469040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.469059 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.475875 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.475955 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.475998 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.476032 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476219 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
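The nestedpendingoperations entries show the kubelet's per-volume retry throttle: each consecutive failure doubles durationBeforeRetry, and the 32s seen here is consistent with roughly the seventh consecutive failure under the usual 500ms-doubling scheme. A small Go sketch of that pattern; the constants are assumptions based on upstream kubelet defaults, not values read from this log:

```go
// Sketch of the doubling backoff implied by "(durationBeforeRetry 32s)":
// a per-operation delay that starts small and doubles per failure up to a cap.
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond        // assumed initial durationBeforeRetry
	maxDelay     = 2*time.Minute + 2*time.Second // assumed cap
)

// durationBeforeRetry returns the delay imposed after the given number of
// consecutive failures, doubling from initialDelay and saturating at maxDelay.
func durationBeforeRetry(failures int) time.Duration {
	d := initialDelay
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for f := 1; f <= 9; f++ {
		fmt.Printf("failure %d -> retry after %s\n", f, durationBeforeRetry(f))
	}
	// failure 7 -> 32s, matching "No retries permitted until ... (durationBeforeRetry 32s)"
}
```

The underlying cause here is separate from the backoff itself: the kubevirt.io.hostpath-provisioner CSI driver has not re-registered with the kubelet yet, so every TearDownAt attempt fails and the delay keeps growing toward the cap.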
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476244 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476262 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476324 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:23:31.476303887 +0000 UTC m=+84.596740361 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476733 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476787 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476813 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476825 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476838 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:23:31.47680658 +0000 UTC m=+84.597243054 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476836 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476862 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:23:31.476850291 +0000 UTC m=+84.597286735 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476969 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:23:31.476941253 +0000 UTC m=+84.597377717 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.522931 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:09:21.587737161 +0000 UTC Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.566345 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.566470 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.566343 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.566608 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
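The certificate_manager.go entry above is worth a second look: the kubelet-serving certificate expires 2026-02-24, yet the rotation deadline (2025-12-06) is already in the past relative to this log's clock, so rotation is overdue, which fits the picture of a node whose clock jumped forward past its certificates. A sketch of how a client-go-style certificate manager picks that deadline, assuming a uniform jitter of 70-90% into the validity window (the exact jitter range is an assumption inferred from the logged dates):

```go
// Pick a "rotation deadline" somewhere in the last stretch of a certificate's
// validity window, so rotation happens well before expiry.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline returns notBefore + j*(notAfter-notBefore) for a random
// fraction j in [0.7, 0.9) -- an assumed jitter range for this sketch.
func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	validity := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*r.Float64()
	return notBefore.Add(time.Duration(jitter * float64(validity)))
}

func main() {
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	// Assumed one-year validity ending at the expiry logged above.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	deadline := rotationDeadline(notBefore, notAfter, r)
	fmt.Printf("rotate at %s (expiry %s)\n",
		deadline.Format(time.RFC3339), notAfter.Format(time.RFC3339))
	// The logged deadline of 2025-12-06 sits about 78% through a one-year
	// window, which is consistent with this kind of jittered scheme.
}
```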
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.566751 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.566880 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.571121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.571182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.571202 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.571228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.571245 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.674327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.674374 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.674385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.674402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.674439 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.778259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.778314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.778333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.778355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.778373 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.880905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.880982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.881003 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.881027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.881044 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.937780 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/2.log" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.938769 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/1.log" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.942322 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" exitCode=1 Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.942388 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.942478 4835 scope.go:117] "RemoveContainer" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.943333 4835 scope.go:117] "RemoveContainer" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.943609 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.972819 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.972819 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.983985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.984050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.984062 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.984078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.984089 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.988988 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.999848 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.008965 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.021437 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.035494 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.048050 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.058052 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.080599 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb0
0a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.086341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.086401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.086443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.086468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.086486 
4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.093189 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.104654 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.117650 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.133093 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.153244 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.176312 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.189285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.189370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc 
kubenswrapper[4835]: I0201 07:23:00.189392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.189498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.189534 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.197794 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.291964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.292032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 
07:23:00.292047 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.292069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.292084 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.395317 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.395379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.395396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.395477 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.395504 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.498172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.498237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.498254 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.498278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.498296 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.523609 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:14:30.12398468 +0000 UTC Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.566153 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:00 crc kubenswrapper[4835]: E0201 07:23:00.566396 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.600854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.600974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.600995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.601020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.601038 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.704045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.704102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.704119 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.704142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.704159 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.806886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.806936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.806956 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.806981 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.806998 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.910098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.910173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.910196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.910224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.910242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.947947 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/2.log" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.953018 4835 scope.go:117] "RemoveContainer" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" Feb 01 07:23:00 crc kubenswrapper[4835]: E0201 07:23:00.953261 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.972267 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.003713 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a
47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.013497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.013568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.013589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.013613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.013630 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.020632 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.038624 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.059144 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.077617 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.098917 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.115766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.115841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.115866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.115914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.115939 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.123242 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.143306 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add9
91e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.160598 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.176439 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.191798 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.206313 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.219348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.219454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.219480 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.219510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.219535 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.222512 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.243894 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.262887 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.322255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.322320 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.322338 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.322361 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.322378 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.426139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.426197 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.426214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.426238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.426257 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.524622 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:38:17.888533582 +0000 UTC Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.529064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.529155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.529182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.529213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.529235 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.566395 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.566449 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.566402 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:01 crc kubenswrapper[4835]: E0201 07:23:01.566601 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:01 crc kubenswrapper[4835]: E0201 07:23:01.566803 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:01 crc kubenswrapper[4835]: E0201 07:23:01.566966 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.632931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.632994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.633010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.633035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.633053 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.735829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.735927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.735945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.735968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.735986 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.839241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.839348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.839377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.839494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.839519 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.942596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.942704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.942762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.942819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.942838 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.046282 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.046322 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.046334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.046350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.046362 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.149468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.149528 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.149545 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.149569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.149586 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.252946 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.253010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.253035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.253065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.253088 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.356123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.356190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.356210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.356235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.356251 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.459177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.459240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.459262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.459291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.459312 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.524999 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:30:56.261003884 +0000 UTC Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.562258 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.562321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.562338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.562361 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.562379 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.566584 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:02 crc kubenswrapper[4835]: E0201 07:23:02.566767 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.665658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.665736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.665758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.665785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.665802 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.768664 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.768737 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.768761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.768795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.768819 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.872304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.872486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.872523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.872553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.872575 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.975363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.975462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.975481 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.975508 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.975526 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.079478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.079566 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.079589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.079620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.079643 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.182074 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.182167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.182185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.182210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.182227 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.285751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.285841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.285863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.285898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.285922 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.389709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.389772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.389795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.389829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.389852 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.493312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.493377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.493395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.493454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.493474 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.515333 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:03 crc kubenswrapper[4835]: E0201 07:23:03.515629 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:23:03 crc kubenswrapper[4835]: E0201 07:23:03.515732 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:23:19.515705022 +0000 UTC m=+72.636141486 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.525967 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 19:44:26.850472174 +0000 UTC Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.566458 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.566576 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.566658 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:03 crc kubenswrapper[4835]: E0201 07:23:03.566783 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:03 crc kubenswrapper[4835]: E0201 07:23:03.567075 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:03 crc kubenswrapper[4835]: E0201 07:23:03.567165 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.595916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.595983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.596003 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.596028 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.596045 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.699508 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.699651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.699678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.699709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.699733 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.802543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.802618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.802634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.802658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.802675 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.860749 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.878939 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.885674 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906099 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906844 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906885 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.924987 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.957080 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a
47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.974361 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.993082 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.009527 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.009614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.009632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.009656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.009678 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.012385 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.031594 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.051699 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.075826 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.078808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.078899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.078927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.078952 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.078972 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.097389 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.101655 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"8
3c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.106951 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.107002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.107024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.107050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.107069 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.118554 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.127178 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 
2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.131794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.131866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.131889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.131919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.131940 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.135646 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 
07:23:04.150168 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.151147 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.156724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.156789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.156805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.156827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.156843 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.168707 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.177587 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.182656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.182742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.182760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.182786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.182806 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.189454 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.203622 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 
2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.204250 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.206991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.207036 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.207054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.207088 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.207108 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.310547 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.310631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.310655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.310685 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.310710 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.414158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.414224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.414240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.414263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.414280 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.517686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.517742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.517757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.517779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.517796 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.526382 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:18:11.319727901 +0000 UTC Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.565899 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.566057 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.620246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.620310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.620327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.620352 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.620371 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.724039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.724195 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.724220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.724245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.724266 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.827665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.827739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.827761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.827791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.827817 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.930069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.930123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.930140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.930165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.930184 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.033544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.033604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.033621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.033643 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.033660 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.136355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.136461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.136486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.136517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.136541 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.239606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.239678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.239700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.239732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.239755 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.343036 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.343092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.343117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.343144 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.343165 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.447185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.447236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.447256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.447279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.447296 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.526718 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:04:46.050442486 +0000 UTC Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.551108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.551181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.551206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.551238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.551263 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.566616 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.566622 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:05 crc kubenswrapper[4835]: E0201 07:23:05.566881 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.566623 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:05 crc kubenswrapper[4835]: E0201 07:23:05.567012 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:05 crc kubenswrapper[4835]: E0201 07:23:05.567191 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.653947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.654007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.654024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.654048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.654067 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.756529 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.756589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.756606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.756632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.756652 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.860139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.860203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.860220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.860248 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.860264 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.963494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.963542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.963559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.963581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.963599 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.066836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.066891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.066906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.066927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.066941 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.170572 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.170638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.170655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.170680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.170698 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.273948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.274022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.274045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.274076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.274104 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.376815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.376876 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.376892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.376934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.376951 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.480212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.480262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.480279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.480302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.480323 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.527843 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:30:43.940132067 +0000 UTC Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.566683 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:06 crc kubenswrapper[4835]: E0201 07:23:06.566881 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.583052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.583104 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.583121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.583144 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.583161 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.687027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.687081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.687097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.687140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.687158 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.790740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.790825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.790905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.790985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.791007 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.894552 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.894618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.894638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.894666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.894687 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.997698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.997760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.997776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.997800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.997818 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.100711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.100773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.100790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.100816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.100836 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.203679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.203739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.203757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.203782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.203798 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.306209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.306239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.306247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.306259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.306268 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.408929 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.409007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.409025 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.409047 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.409064 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.513842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.513892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.513909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.513934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.513953 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.528874 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:20:05.974152274 +0000 UTC Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.565758 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.565828 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:07 crc kubenswrapper[4835]: E0201 07:23:07.565984 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.566096 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:07 crc kubenswrapper[4835]: E0201 07:23:07.566317 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:07 crc kubenswrapper[4835]: E0201 07:23:07.566574 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.585287 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.613729 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f
36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.616321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 
07:23:07.616369 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.616387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.616437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.616455 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.632220 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.652299 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.673356 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.704844 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a
47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.718983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.719158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.719273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.719382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.719513 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.724270 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.742498 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.762631 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.783388 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.806575 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.823315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.823380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.823402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.823469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.823497 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.834130 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.854729 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add9
91e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.869948 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.887850 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.902649 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.918615 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.926450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.926501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.926513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.926531 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.926543 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.030013 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.030189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.030304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.030384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.030456 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.133058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.133158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.133177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.133237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.133256 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.236708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.236788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.236811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.236843 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.236865 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.340609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.340669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.340688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.340712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.340730 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.443148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.443218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.443242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.443280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.443304 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.530003 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:22:54.872935706 +0000 UTC Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.546490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.546566 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.546588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.546619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.546639 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.566195 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:08 crc kubenswrapper[4835]: E0201 07:23:08.566359 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.650179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.650286 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.650310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.650346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.650370 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.758200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.758639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.758795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.758952 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.759081 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.861723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.861799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.861817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.861844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.861864 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.964968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.965023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.965042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.965069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.965086 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.068027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.068111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.068136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.068162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.068186 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.170993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.171391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.171587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.171717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.171855 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.274994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.275073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.275097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.275124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.275145 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.378127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.378185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.378201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.378226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.378242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.481026 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.481264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.481404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.481604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.481744 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.530952 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:43:47.228509642 +0000 UTC Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.566664 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:09 crc kubenswrapper[4835]: E0201 07:23:09.567017 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.566798 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.566747 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:09 crc kubenswrapper[4835]: E0201 07:23:09.567700 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:09 crc kubenswrapper[4835]: E0201 07:23:09.567789 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.584278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.584341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.584359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.584383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.584401 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.694162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.694231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.694249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.694272 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.694289 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.797691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.797766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.797803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.797840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.797864 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.900273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.900577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.900676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.900777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.900873 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.003750 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.003818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.003837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.003860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.003878 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.106839 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.106912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.106933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.106962 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.106984 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.210063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.210141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.210165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.210193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.210211 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.312094 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.312147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.312164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.312186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.312203 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.415458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.415493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.415504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.415520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.415530 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.518725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.519065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.519192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.519351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.519527 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.531351 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:30:43.513690384 +0000 UTC Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.565942 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:10 crc kubenswrapper[4835]: E0201 07:23:10.566153 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.622578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.622644 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.622662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.622687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.622704 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.725204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.725264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.725280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.725308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.725324 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.828540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.829060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.829158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.829265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.829366 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.932986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.933036 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.933051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.933071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.933084 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.035658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.035712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.035725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.035747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.035763 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.138150 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.138207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.138224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.138246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.138265 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.246011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.246084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.246103 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.246127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.246147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.349526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.349574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.349586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.349602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.349614 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.452689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.452731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.452742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.452758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.452770 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.532340 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:28:49.059362321 +0000 UTC Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.555286 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.555333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.555344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.555360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.555372 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.565810 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.565812 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:11 crc kubenswrapper[4835]: E0201 07:23:11.566114 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.566144 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:11 crc kubenswrapper[4835]: E0201 07:23:11.566372 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:11 crc kubenswrapper[4835]: E0201 07:23:11.566640 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.567559 4835 scope.go:117] "RemoveContainer" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" Feb 01 07:23:11 crc kubenswrapper[4835]: E0201 07:23:11.567892 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.659367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.659486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.659512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.659549 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.659575 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.762534 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.762600 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.762620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.762645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.762662 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.865488 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.865561 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.865579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.865603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.865620 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.968611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.968667 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.968684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.968706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.968723 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.072225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.072286 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.072302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.072326 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.072346 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.175167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.175214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.175227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.175244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.175258 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.278796 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.278869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.278888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.278914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.278936 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.382079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.382131 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.382144 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.382162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.382175 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.485304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.485350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.485362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.485379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.485391 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.532796 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:34:05.416423956 +0000 UTC
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.566085 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:23:12 crc kubenswrapper[4835]: E0201 07:23:12.566233 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.587793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.587842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.587853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.587872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.587886 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.690305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.690366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.690391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.690445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.690463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.792807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.792838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.792848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.792860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.792870 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.894289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.894359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.894375 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.894400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.894445 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.997269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.997314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.997323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.997340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.997351 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.099900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.099946 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.099955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.099971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.099982 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.202107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.202151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.202162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.202178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.202189 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.304855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.304901 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.304912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.304929 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.304939 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.407476 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.407524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.407536 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.407555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.407568 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.510161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.510201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.510211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.510229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.510242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.533848 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:31:07.154502441 +0000 UTC
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.566185 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.566206 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.566247 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:23:13 crc kubenswrapper[4835]: E0201 07:23:13.566285 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe"
Feb 01 07:23:13 crc kubenswrapper[4835]: E0201 07:23:13.566560 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 07:23:13 crc kubenswrapper[4835]: E0201 07:23:13.566620 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.612197 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.612275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.612298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.612332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.612355 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.714817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.714848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.714856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.714869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.714879 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.817687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.817799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.817827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.817862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.817887 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.920687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.920746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.920764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.920788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.920805 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.023173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.023221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.023241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.023264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.023280 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.126590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.126656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.126679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.126710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.126730 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.229487 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.229533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.229565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.229583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.229596 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.282701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.282751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.282768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.282793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.282810 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.296096 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:14Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.300013 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.300058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.300076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.300098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.300115 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.317378 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:14Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.320774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.321007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.321220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.321454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.321668 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.334846 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:14Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.338500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.338569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.338592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.338619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.338640 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.352382 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:14Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.357090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.357164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.357181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.357203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.357220 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.369279 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:14Z is after 2025-08-24T17:21:41Z"
Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.369451 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
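The three failed patch attempts above share a single root cause: the kubelet's node-status PATCH for node "crc" must pass the validating webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743, and that endpoint's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-02-01T07:23:14Z, so every attempt fails TLS verification until the retry budget is exhausted ("update node status exceeds retry count"). The following is an illustrative sketch, not part of the captured log, of how one might confirm the certificate's validity window from the node; it assumes Python with the third-party cryptography package and that the endpoint accepts a plain TLS handshake (the host and port are taken from the error message):

    import ssl
    from cryptography import x509  # third-party: pip install cryptography

    # Webhook endpoint named in the error above. With ca_certs left as None,
    # ssl.get_server_certificate skips verification, so the expired
    # certificate can still be fetched for inspection.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notBefore:", cert.not_valid_before_utc)  # *_utc accessors need cryptography >= 42
    print("notAfter: ", cert.not_valid_after_utc)   # per the log, expect 2025-08-24 17:21:41+00:00

On a CRC cluster this pattern typically appears when the VM is started long after its internal certificates have lapsed; node-status updates cannot succeed until the webhook's serving certificate is rotated.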
event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.371315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.371493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.371704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.371892 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.474568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.474945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.475145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.475354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.475573 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.534365 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:11:43.875353641 +0000 UTC Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.565862 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.566181 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.578548 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.578586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.578597 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.578611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.578622 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.680396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.680466 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.680479 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.680496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.680507 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.782910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.783112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.783194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.783275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.783354 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.885316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.885367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.885385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.885451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.885474 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.987777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.988034 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.988161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.988263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.988340 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.090583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.090614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.090624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.090638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.090647 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.194193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.194229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.194242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.194257 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.194267 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.298233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.298385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.298401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.298443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.298463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.401152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.401198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.401209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.401225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.401236 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.503766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.503800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.503812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.503827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.503837 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.535671 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:49:33.645495562 +0000 UTC Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.566354 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:15 crc kubenswrapper[4835]: E0201 07:23:15.566470 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.566480 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.566575 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:15 crc kubenswrapper[4835]: E0201 07:23:15.566742 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:15 crc kubenswrapper[4835]: E0201 07:23:15.566844 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.605694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.605724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.605733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.605747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.605758 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.707659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.707727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.707751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.707775 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.707791 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.809886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.809954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.809977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.810005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.810027 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.912002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.912081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.912102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.912128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.912147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.014967 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.015017 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.015031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.015049 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.015063 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.118057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.118130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.118146 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.118172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.118190 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.221806 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.221858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.221874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.221895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.221908 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.324137 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.324199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.324220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.324247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.324268 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.426215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.426285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.426305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.426329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.426347 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.528389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.528468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.528485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.528505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.528521 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.536018 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:42:30.301849078 +0000 UTC Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.566618 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:16 crc kubenswrapper[4835]: E0201 07:23:16.566800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.631863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.631909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.631921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.631936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.631946 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.734392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.734450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.734462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.734475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.734485 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.837608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.837649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.837662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.837679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.837691 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.940513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.940540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.940552 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.940567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.940576 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.043676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.043719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.043730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.043745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.043756 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.145651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.145679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.145690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.145702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.145713 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.248080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.248143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.248164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.248191 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.248215 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.350394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.350433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.350441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.350452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.350461 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.453199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.453229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.453241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.453256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.453266 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.536955 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:25:51.29653789 +0000 UTC Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.555720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.555750 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.555763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.555781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.555792 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.566202 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.566257 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:17 crc kubenswrapper[4835]: E0201 07:23:17.566373 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.566471 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:17 crc kubenswrapper[4835]: E0201 07:23:17.566584 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:17 crc kubenswrapper[4835]: E0201 07:23:17.566651 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.580926 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e
6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.602788 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.616188 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.630666 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.643964 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.656890 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.658097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.658138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.658152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.658171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.658182 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.667763 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.679233 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.693721 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.711030 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.735838 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a
47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.749777 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.761159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.761217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.761240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.761268 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.761289 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.766879 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.789405 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.813211 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.833975 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"na
me\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.854163 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.864207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.864235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc 
kubenswrapper[4835]: I0201 07:23:17.864244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.864259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.864270 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.966514 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.966573 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.966590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.966613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.966633 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.069377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.069438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.069450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.069463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.069473 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.172631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.172674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.172683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.172698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.172707 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.275668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.275725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.275745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.275771 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.275789 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.378870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.378914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.378922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.378938 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.378947 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.481246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.481305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.481325 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.481349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.481367 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.537468 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:39:46.54599595 +0000 UTC Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.565753 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:18 crc kubenswrapper[4835]: E0201 07:23:18.565915 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.586478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.586512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.586521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.586535 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.586545 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.689310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.689372 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.689392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.689442 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.689461 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.791587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.791621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.791629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.791642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.791651 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.893558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.893616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.893633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.893656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.893675 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.996279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.996337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.996352 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.996375 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.996390 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.098648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.098715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.098738 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.098760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.098776 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.201366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.201398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.201419 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.201431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.201440 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.304697 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.304739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.304747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.304761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.304770 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.406944 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.406988 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.406996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.407012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.407025 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.509812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.509845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.509853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.509865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.509873 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.538238 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:16:27.286561508 +0000 UTC
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.566756 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.566796 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:23:19 crc kubenswrapper[4835]: E0201 07:23:19.566888 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.566924 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5"
Feb 01 07:23:19 crc kubenswrapper[4835]: E0201 07:23:19.567098 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 07:23:19 crc kubenswrapper[4835]: E0201 07:23:19.567189 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.586379 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5"
Feb 01 07:23:19 crc kubenswrapper[4835]: E0201 07:23:19.586557 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 01 07:23:19 crc kubenswrapper[4835]: E0201 07:23:19.586615 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:23:51.586599078 +0000 UTC m=+104.707035512 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.611998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.612041 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.612053 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.612069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.612084 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.714989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.715043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.715061 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.715086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.715105 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
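The MountVolume failure above is a missing-object problem, not an I/O one: "not registered" means openshift-multus/metrics-daemon-secret is not yet in the kubelet's informer cache, so the secret volume plugin cannot resolve it, and nestedpendingoperations schedules the retry with exponential backoff, here 32s out. A sketch of that backoff shape, assuming the defaults I believe kubelet's goroutinemap/exponentialbackoff package uses (initial 500ms, doubling per failure, capped at 2m2s); treat the constants as assumptions:

package main

import (
	"fmt"
	"time"
)

// Assumed defaults modeled on kubelet's backoff for failed volume
// operations: each consecutive failure doubles the wait, up to a cap.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

func delayAfter(failures int) time.Duration {
	d := initialDelay
	for i := 1; i < failures; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 9; n++ {
		fmt.Printf("failure %d -> retry in %v\n", n, delayAfter(n))
	}
	// Under these assumptions, failure 7 -> 32s, matching the
	// "(durationBeforeRetry 32s)" recorded above.
}

The "durationBeforeRetry 32s" therefore suggests this mount has already failed several times since kubelet start (m=+104s), consistent with the secret never having arrived.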
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.818751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.818805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.818826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.818856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.818881 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.921008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.921101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.921114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.921134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.921152 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.023905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.023939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.023950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.023965 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.023977 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.126384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.126443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.126457 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.126473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.126483 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.228594 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.228636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.228648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.228662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.228670 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.330919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.330963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.330971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.330986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.330995 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
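Nothing new happens in these blocks; only the timestamps advance. When reading a capture like this one, it helps to collapse the heartbeat spam before hunting for real state changes. A small, hypothetical filter that strips the clock fields and prints only the first occurrence of each distinct message:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// tsRe matches the clock fields that make otherwise identical kubelet
// lines unique (both "07:23:19" and "07:23:19.098648" forms).
var tsRe = regexp.MustCompile(`\d{2}:\d{2}:\d{2}(\.\d+)?`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some entries exceed the 64 KiB default
	seen := map[string]bool{}
	suppressed := 0
	for sc.Scan() {
		line := sc.Text()
		key := tsRe.ReplaceAllString(line, "T")
		if seen[key] {
			suppressed++
			continue
		}
		seen[key] = true
		fmt.Println(line)
	}
	fmt.Fprintf(os.Stderr, "suppressed %d repeated line(s)\n", suppressed)
}

Run against this section it would keep one copy of each of the five heartbeat messages and surface the certificate, volume, and webhook entries that actually differ.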
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.433128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.433159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.433170 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.433183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.433192 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.535905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.535955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.535972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.535997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.536014 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.538958 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:27:25.172851841 +0000 UTC
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.566616 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:23:20 crc kubenswrapper[4835]: E0201 07:23:20.566740 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.638536 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.638605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.638628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.638652 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.638669 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.741665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.741711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.741722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.741768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.741778 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.844406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.844462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.844470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.844484 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.844493 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.946683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.946736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.946748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.946766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.946783 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
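From here the log's second failure mode appears: kube-multus exits with code 1, and every pod status patch that follows is rejected because the apiserver must consult the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-02-01 (which is also why the webhook's own pod, network-node-identity-vrzqb, shows up among the failures). A quick way to confirm such a mismatch from the node, sketched with Go's standard TLS client; InsecureSkipVerify is deliberate here because the point is to read the expired certificate, not to trust it:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the log above; adjust when probing another webhook.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the cert rather than verify it
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter)
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

On this node the output would show notAfter 2025-08-24T17:21:41Z, matching the x509 errors recorded below.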
Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.020099 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/0.log"
Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.020169 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e" containerID="213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd" exitCode=1
Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.020208 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerDied","Data":"213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd"}
Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.020727 4835 scope.go:117] "RemoveContainer" containerID="213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd"
Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.037644 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.050199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.050264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.050283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.050308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.050326 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.053355 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.071354 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.088862 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.110552 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.131553 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.147194 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.153362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.153405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.153440 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.153461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.153474 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.162397 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.175600 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.189281 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.202366 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.221284 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.231252 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.243558 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T0
7:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.255142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.255198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.255207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.255220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.255230 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.257013 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.274793 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.294970 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z 
[verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.357486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.357539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.357555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.357581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.357598 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.460139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.460206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.460226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.460251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.460268 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.539319 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:54:24.876268939 +0000 UTC Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.562772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.562831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.562840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.562855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.562883 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.566287 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:21 crc kubenswrapper[4835]: E0201 07:23:21.566404 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.566485 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:21 crc kubenswrapper[4835]: E0201 07:23:21.566603 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.566294 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:21 crc kubenswrapper[4835]: E0201 07:23:21.566718 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.665590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.665654 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.665671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.665695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.665716 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.768176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.768265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.768288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.768319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.768342 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.872109 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.872153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.872169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.872194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.872211 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.975959 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.976038 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.976060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.976089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.976106 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.034583 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/0.log" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.034638 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerStarted","Data":"c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.059080 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.075531 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.079127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.079180 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.079194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.079215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.079231 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.095360 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.110253 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.125822 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.141634 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a
47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.151528 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.162798 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.174624 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.181595 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.181666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 
07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.181682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.181706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.181726 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.185112 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.196347 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.212594 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.228531 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.240240 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.254394 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.272908 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.284779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.284829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.284844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.284863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.284877 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.286828 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.387508 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc 
kubenswrapper[4835]: I0201 07:23:22.387567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.387584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.387610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.387628 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.490371 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.490472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.490489 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.490513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.490529 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.540302 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:36:38.250102985 +0000 UTC Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.565901 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:22 crc kubenswrapper[4835]: E0201 07:23:22.566262 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.566708 4835 scope.go:117] "RemoveContainer" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.592726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.592788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.592804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.592826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.592843 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.695798 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.695898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.695937 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.695980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.695999 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.799265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.799346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.799368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.799515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.799562 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.902359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.902427 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.902441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.902462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.902477 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.005984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.006039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.006052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.006069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.006460 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.042252 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/2.log" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.047106 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.048722 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.085331 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008
036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.105364 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.108874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.108913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.108927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.108949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.108964 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.131197 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.151853 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.171813 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service 
openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.182194 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.191063 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.201310 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.210708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.210741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.210752 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.210768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.210779 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.214066 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.228517 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.241266 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.258506 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.270948 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.281908 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.293897 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.308234 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.313181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.313220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.313231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.313249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.313262 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.323035 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.416610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.416690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.416715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.416749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.416774 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.520259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.520363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.520387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.520451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.520480 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.540970 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:58:13.918772562 +0000 UTC Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.566549 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.566583 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.566583 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:23 crc kubenswrapper[4835]: E0201 07:23:23.566831 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:23 crc kubenswrapper[4835]: E0201 07:23:23.567026 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:23 crc kubenswrapper[4835]: E0201 07:23:23.567202 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.622949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.623001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.623016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.623035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.623049 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.725689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.725739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.725748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.725761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.725773 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.828764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.828818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.828835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.828857 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.828874 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.932247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.932318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.932343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.932373 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.932395 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.035364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.035500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.035521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.035546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.035563 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.053534 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/3.log" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.054622 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/2.log" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.058623 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" exitCode=1 Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.058688 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.058737 4835 scope.go:117] "RemoveContainer" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.060376 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.061119 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.087259 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.103834 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.124077 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.138390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.138465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.138483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.138505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.138521 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.142684 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.160929 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.192000 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\
\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:23Z\\\",\\\"message\\\":\\\"7:23:23.631079 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:23:23.631159 6884 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631243 6884 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631499 6884 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:23:23.632231 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:23:23.632268 6884 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 07:23:23.632293 6884 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0201 07:23:23.632309 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:23:23.632431 6884 factory.go:656] Stopping watch factory\\\\nI0201 07:23:23.632451 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:23:23.632677 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:23:23.632717 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 07:23:23.632728 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.206285 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.222982 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241566 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241632 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241868 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 
07:23:24.260451 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.280755 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.303596 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.323067 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.342029 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.344231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.344273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.344293 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.344322 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.344344 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.360155 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.376024 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.392373 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.446942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.446992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.447008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.447030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.447047 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.541403 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:23:46.06709514 +0000 UTC Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.549692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.549749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.549771 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.549830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.549856 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.565731 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.565940 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.616518 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.616578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.616596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.616618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.616634 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.634596 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z"
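Every status patch in this stretch is being rejected for the same underlying reason: the network-node-identity webhook's serving certificate expired at 2025-08-24T17:21:41Z, while the node clock reads 2026-02-01T07:23:24Z. The "x509: certificate has expired or is not yet valid" text comes from the validity-window check in Go's crypto/x509; the sketch below reproduces that comparison against a PEM certificate on disk (the file path is illustrative, not taken from this host).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Illustrative path; any PEM-encoded certificate can be checked this way.
	data, err := os.ReadFile("webhook-serving-cert.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The validity-window comparison behind the TLS failures logged above.
	now := time.Now().UTC()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Run against an expired certificate, this prints the same current-time/NotAfter pair that appears in the log entries above.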
event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.639516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.639543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.639564 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.661214 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.665474 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.665523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.665540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.665562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.665582 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.685870 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.691022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.691073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.691089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.691110 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.691126 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.711895 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.717058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.717112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.717129 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.717153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.717171 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.737201 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.737453 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.739854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
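The run of "Error updating node status, will retry" entries capped by "Unable to update node status" with err="update node status exceeds retry count" is the kubelet's bounded node-status retry loop. A minimal sketch of that pattern follows; the constant name and value (nodeStatusUpdateRetry = 5) mirror the upstream kubelet_node_status.go that these entries cite, but the loop itself is an illustration with the failing PATCH stubbed out.

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the upstream kubelet constant; see
// pkg/kubelet/kubelet_node_status.go in kubernetes/kubernetes.
const nodeStatusUpdateRetry = 5

// tryUpdateNodeStatus stands in for the PATCH of the node's status
// subresource; during the window logged above it always fails because the
// admission webhook's serving certificate is expired.
func tryUpdateNodeStatus(attempt int) error {
	return errors.New("failed to patch status: webhook certificate expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Printf("Unable to update node status: %v\n", err)
	}
}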
event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.739905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.739928 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.739955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.739977 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.842888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.842946 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.842963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.842984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.842999 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.946065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.946118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.946129 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.946149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.946161 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.055015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.055090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.055108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.055132 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.055147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.065329 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/3.log"
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.071713 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"
Feb 01 07:23:25 crc kubenswrapper[4835]: E0201 07:23:25.072010 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7"
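
The RemoveContainer/CrashLoopBackOff pair above explains why the CNI config never appears: ovnkube-controller keeps dying and the kubelet is holding it in restart backoff. The quoted 40s is consistent with the kubelet's usual doubling backoff (10s base, capped at 5m in upstream defaults; both constants are assumed here rather than read from this node's configuration). A sketch of that schedule:

```go
// Sketch of a kubelet-style crash-loop backoff: the delay starts at a
// base, doubles on each consecutive failure, and is capped. 10s/5m are
// assumed upstream defaults, not values taken from this node.
package main

import (
	"fmt"
	"time"
)

func crashLoopDelay(consecutiveFailures int, base, max time.Duration) time.Duration {
	d := base
	for i := 1; i < consecutiveFailures; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> back-off %s\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// failure 3 -> back-off 40s, matching the ovnkube-controller line above.
}
```
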
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.115962 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.138099 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z"
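
The patch above also records why kube-multus itself restarted (restartCount 1, exit 1, restarted at 07:23:21): multus defers to the default network and polls for ovn-kubernetes's readiness-indicator file before serving CNI requests, and that file never appeared, so the wait ended with the pollimmediate timeout quoted in the message (the log shows roughly 45s between "multus-daemon started" at 07:22:34 and the error at 07:23:19). A stdlib-only sketch of such a wait; the real daemon uses apimachinery's wait.PollImmediate with its own configured interval and timeout, so the 1s/45s values below are illustrative only:

```go
// Stdlib-only sketch of a readiness-indicator wait: poll for the
// default network's config file until it exists or a timeout elapses.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // default network is ready; start serving CNI requests
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForReadinessIndicator(
		"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	fmt.Println(err) // on this node the file never appeared, so multus exited 1
}
```
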
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.159095 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.159145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.159159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.159179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.159192 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.160260 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.177543 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.197346 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.211606 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.224265 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.238090 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.252649 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.261817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.261907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.261943 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.261973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.261996 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.265753 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.283724 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 
07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.301800 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.317369 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.338043 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3
841a636238cd164e40f765fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:23Z\\\",\\\"message\\\":\\\"7:23:23.631079 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:23:23.631159 6884 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631243 6884 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631499 6884 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:23:23.632231 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:23:23.632268 6884 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 07:23:23.632293 6884 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:23:23.632309 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:23:23.632431 6884 factory.go:656] Stopping watch factory\\\\nI0201 07:23:23.632451 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:23:23.632677 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:23:23.632717 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 07:23:23.632728 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.351705 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.364134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.364180 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.364192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.364208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.364219 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.366739 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.466642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.466712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.466730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.466756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.466775 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.541881 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:05:48.97353426 +0000 UTC Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.567755 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.567845 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.567937 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:25 crc kubenswrapper[4835]: E0201 07:23:25.567944 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:25 crc kubenswrapper[4835]: E0201 07:23:25.568082 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:25 crc kubenswrapper[4835]: E0201 07:23:25.568163 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.569526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.569579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.569598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.569623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.569643 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.671723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.671790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.671808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.671833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.671854 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.773769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.773813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.773824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.773841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.773853 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.876713 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.876770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.876780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.876794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.876803 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.979711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.979765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.979784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.979808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.979827 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.082146 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.082592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.082730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.082761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.082780 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.185828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.186152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.186283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.186624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.186654 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.290579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.290651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.290670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.290695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.290712 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.393138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.393200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.393219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.393244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.393261 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.496030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.496316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.496496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.496630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.496780 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.542938 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:12:39.928042703 +0000 UTC Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.565722 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:26 crc kubenswrapper[4835]: E0201 07:23:26.565874 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.591028 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.600709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.600765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.600783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.600806 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.600823 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.703015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.703067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.703084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.703108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.703124 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.806785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.806861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.806885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.806916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.806940 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.909270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.909334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.909351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.909377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.909395 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.012520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.012568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.012585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.012610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.012627 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.114919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.114984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.115006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.115030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.115048 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.218087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.218141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.218158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.218180 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.218199 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.320792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.320850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.320870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.320894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.320911 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.424198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.424270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.424287 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.424314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.424331 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.528039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.528102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.528124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.528152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.528175 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.543919 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:17:41.053531345 +0000 UTC Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.566548 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.566588 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.566660 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:27 crc kubenswrapper[4835]: E0201 07:23:27.566770 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:27 crc kubenswrapper[4835]: E0201 07:23:27.566889 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:27 crc kubenswrapper[4835]: E0201 07:23:27.567123 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.592292 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb2
89a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.613469 4835 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.630744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.630815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.630833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.630855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.630871 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.634690 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.651996 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.682516 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb5fed5-5d65-4f0a-a51a-3109fffc9113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a4c738f66e1428697d199630cc541f018b1aa36edcb0e3e3ad32ddab2b5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f76a95142c00257f569b0db87094f23435274cbe36740d658bac63c26a55233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64accb3c02d2092922d2534d7c21dd160d0ed2b2ff1cbc19870174f818ba4486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://8444f60530510645c3592013a63e5a5b3cdf6872788309d94d5a18fe1553a937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64bfb072019b8c1917e27199bbb7b1491df307cb14257e4cd502f3062a674890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b8ca0d26229f7f9b48abfd0b2c34737b94ba1564e0b9f913d594d2fbdeb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://084b8ca0d26229f7f9b48abfd0b2c34737b94ba1564e0b9f913d594d2fbdeb13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b3bb2745bd4b232691a2bacf466c147eea6e1068cf4399fd5b46ded7afce49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b3bb2745bd4b232691a2bacf466c147eea6e1068cf4399fd5b46ded7afce49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://4f420acbcdf8ac32ffbc7f6545be0e96c7e9630fd8285c50cda7cf636deb7769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f420acbcdf8ac32ffbc7f6545be0e96c7e9630fd8285c50cda7cf636deb7769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.699736 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.721975 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.734295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.734360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.734381 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.734404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.734449 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.741400 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.772808 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3
841a636238cd164e40f765fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:23Z\\\",\\\"message\\\":\\\"7:23:23.631079 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:23:23.631159 6884 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631243 6884 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631499 6884 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:23:23.632231 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:23:23.632268 6884 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 07:23:23.632293 6884 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:23:23.632309 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:23:23.632431 6884 factory.go:656] Stopping watch factory\\\\nI0201 07:23:23.632451 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:23:23.632677 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:23:23.632717 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 07:23:23.632728 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.793045 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to 
/host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.816505 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.836013 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.837841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.837911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.837939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.837970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.837992 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.851671 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.871325 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.889617 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.906471 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.925050 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.946308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.946385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.946459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.946497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.946521 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.948072 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.060881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.060950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.060968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.060992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.061008 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.164383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.164718 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.164732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.164748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.164759 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.267663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.267715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.267731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.267753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.267771 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.370561 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.370616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.370633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.370655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.370672 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.474116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.474157 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.474174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.474195 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.474213 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.545094 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:21:05.709478972 +0000 UTC Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.566701 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:28 crc kubenswrapper[4835]: E0201 07:23:28.566881 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.577179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.577233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.577259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.577285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.577306 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.679780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.679836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.679857 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.679884 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.679906 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.781977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.782152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.782175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.782196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.782213 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.885196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.885285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.885304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.885332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.885348 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.988242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.988306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.988327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.988354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.988377 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.091976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.092045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.092068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.092096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.092122 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.195039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.195109 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.195127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.195152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.195171 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.298056 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.298122 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.298149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.298179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.298199 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.402005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.402076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.402093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.402118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.402137 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.505584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.505645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.505663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.505690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.505707 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.545538 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:31:16.956377936 +0000 UTC Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.565864 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.565933 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.566018 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:29 crc kubenswrapper[4835]: E0201 07:23:29.566200 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:29 crc kubenswrapper[4835]: E0201 07:23:29.566351 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:29 crc kubenswrapper[4835]: E0201 07:23:29.566532 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.609087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.609179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.609198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.609227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.609249 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.711892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.711942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.711959 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.711983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.712003 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.814572 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.814648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.814674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.814702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.814722 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.918016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.918079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.918096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.918120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.918138 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.021232 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.021279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.021295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.021320 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.021336 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.123553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.123625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.123642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.123665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.123685 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.228139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.228215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.228239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.228269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.228290 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.332018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.332176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.332206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.332237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.332258 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.434796 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.434860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.434882 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.434913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.434936 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.538063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.538124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.538141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.538165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.538182 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.545825 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:20:50.927911113 +0000 UTC Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.566237 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:30 crc kubenswrapper[4835]: E0201 07:23:30.566405 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.641916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.641971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.641987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.642009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.642027 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.744490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.744532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.744549 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.744570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.744585 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.847756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.847886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.847910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.847932 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.847951 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.951860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.951985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.952004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.952030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.952050 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.054715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.054812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.054830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.054854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.054875 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.158949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.159011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.159030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.159054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.159071 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.262511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.262610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.262627 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.262654 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.262671 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.365942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.366014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.366039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.366073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.366095 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.415843 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.416213 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.416173575 +0000 UTC m=+148.536610049 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.469266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.469321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.469340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.469364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.469387 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.517220 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.517303 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.517382 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517396 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.517537 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517564 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.517526535 +0000 UTC m=+148.637962999 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517628 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517671 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517695 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517729 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517782 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.517757411 +0000 UTC m=+148.638193885 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517795 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517835 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.517804792 +0000 UTC m=+148.638241346 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517839 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517870 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517965 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.517935925 +0000 UTC m=+148.638372399 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.546352 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:22:42.709787097 +0000 UTC Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.566085 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.566136 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.566288 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.566325 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.566522 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.566664 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.573121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.573174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.573192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.573217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.573233 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.676285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.676347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.676366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.676390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.676522 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.779936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.779996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.780017 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.780048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.780068 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.882618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.882676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.882692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.882714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.882731 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.985939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.985994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.986007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.986024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.986036 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.089498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.089563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.089580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.089609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.089626 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.192204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.192269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.192288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.192314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.192333 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.295639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.295711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.295729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.295756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.295777 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.398223 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.398288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.398310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.398340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.398361 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.501761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.501827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.501847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.501873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.501891 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.547491 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:09:23.632497695 +0000 UTC Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.565925 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:32 crc kubenswrapper[4835]: E0201 07:23:32.566080 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.605152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.605208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.605225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.605250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.605267 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.708276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.708366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.708390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.708488 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.708518 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.811483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.811557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.811579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.811612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.811633 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.914964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.915378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.915593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.915751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.915913 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.019748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.019835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.019858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.019891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.019915 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.122735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.122797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.122814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.122838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.122855 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.226212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.226555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.226663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.226785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.226866 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.329364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.329727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.329819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.329903 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.329987 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.433510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.433575 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.433591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.433617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.433636 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.536942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.537006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.537024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.537051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.537070 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.548461 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:38:52.227991041 +0000 UTC Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.566160 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:33 crc kubenswrapper[4835]: E0201 07:23:33.566338 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.566157 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:33 crc kubenswrapper[4835]: E0201 07:23:33.566672 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.567112 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:33 crc kubenswrapper[4835]: E0201 07:23:33.567491 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.639764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.639826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.639846 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.639870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.639887 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.742777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.742873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.742892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.742917 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.742968 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.845493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.845584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.845609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.845643 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.845669 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.949537 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.949620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.949641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.949666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.949683 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.053599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.053689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.053713 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.053742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.053769 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.156642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.156711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.156728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.156754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.156772 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.259714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.259780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.259797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.259825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.259844 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.362116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.362182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.362199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.362225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.362242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.465318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.465382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.465397 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.465451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.465473 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.549204 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:08:41.344111267 +0000 UTC
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.566748 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:23:34 crc kubenswrapper[4835]: E0201 07:23:34.567076 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.569214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.569279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.569297 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.569319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.569334 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.585725 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.673444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.673503 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.673521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.673543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.673565 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.776004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.776072 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.776090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.776114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.776131 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.879404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.879525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.879548 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.879579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.879600 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.982360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.982447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.982467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.982490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.982509 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.085907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.085968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.085985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.086009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.086030 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.091085 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.091131 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.091148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.091168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.091189 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.113652 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:35Z is after 2025-08-24T17:21:41Z"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.118461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.118515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.118535 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.118570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.118605 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.138123 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.149504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.149571 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.149588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.149613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.149631 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.170543 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.175401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.175699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.175904 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.176136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.176355 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.196289 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.201113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.201171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.201192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.201224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.201245 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.221027 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.221244 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.224080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.224152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.224176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.224204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.224224 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.327524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.327596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.327614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.327640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.327657 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.430840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.430895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.430913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.430936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.430952 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.534138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.534199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.534221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.534273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.534296 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.549739 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:46:48.93853507 +0000 UTC Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.566453 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.566553 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.566552 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.566670 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.566764 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.566947 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.637087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.637143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.637160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.637182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.637201 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.739315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.739378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.739398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.739468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.739498 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.842001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.842073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.842096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.842125 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.842147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.945622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.945696 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.945866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.945910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.946003 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.048749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.048811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.048827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.048853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.048869 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.152811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.152891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.152911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.152948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.152974 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.256241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.256296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.256313 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.256337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.256354 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.359813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.359964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.359984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.360009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.360026 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.463135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.463190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.463204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.463225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.463241 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.550686 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 04:09:53.669500345 +0000 UTC Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.565921 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566107 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:36 crc kubenswrapper[4835]: E0201 07:23:36.566342 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566591 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:23:36 crc kubenswrapper[4835]: E0201 07:23:36.566752 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.669394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.669871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.670032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.670185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.670326 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.774543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.774633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.774658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.774689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.774708 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.878070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.878189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.878210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.878235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.878255 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.981400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.981498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.981526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.981555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.981575 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.085107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.085203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.085221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.085246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.085265 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.190485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.190559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.190579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.190605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.190622 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.294206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.294271 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.294288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.294312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.294330 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.398143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.398219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.398237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.398266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.398284 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.501707 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.501769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.501789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.501814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.501831 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.551404 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:18:53.489807105 +0000 UTC Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.566325 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.566340 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.566358 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:37 crc kubenswrapper[4835]: E0201 07:23:37.566775 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:37 crc kubenswrapper[4835]: E0201 07:23:37.566582 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:37 crc kubenswrapper[4835]: E0201 07:23:37.566933 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.592557 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.606193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.606285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.606307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.606337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.606356 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.615091 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.636853 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.658275 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.666884 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.678613 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.690120 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.705671 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.709637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.709685 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.709701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.709717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.709729 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.719062 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc 
kubenswrapper[4835]: I0201 07:23:37.734427 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.748342 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.762634 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bcb829b-af6e-4f40-b31d-9abcf38c53e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44751d1619bcacbde4be80603e618132541e8aea35b1bea6e6d8805ac2a35c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://857b570e7ae7dd450284342c471cf02691b7fa7eb5bd24ad05e6dd0115d1ff2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://857b570e7ae7dd450284342c471cf02691b7fa7eb5bd24ad05e6dd0115d1ff2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.787915 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb5fed5-5d65-4f0a-a51a-3109fffc9113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a4c738f66e1428697d199630cc541f018b1aa36edcb0e3e3ad32ddab2b5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f76a95142c00257f569b0db87094f23435274cbe36740d658bac63c26a55233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64accb3c02d2092922d2534d7c21dd160d0ed2b2ff1cbc19870174f818ba4486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8444f60530510645c3592013a63e5a5b3cdf687
2788309d94d5a18fe1553a937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64bfb072019b8c1917e27199bbb7b1491df307cb14257e4cd502f3062a674890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b8ca0d26229f7f9b48abfd0b2c34737b94ba1564e0b9f913d594d2fbdeb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://084b8ca0d26229f7f9b48abfd0b2c34737b94ba1564e0b9f913d594d2fbdeb13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b3bb2745bd4b232691a2bacf466c147eea6e1068cf4399fd5b46ded7afce49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b3bb2745bd4b232691a2bacf466c147eea6e1068cf4399fd5b46ded7afce49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f420acbcdf8ac32ffbc7f6545be0e96c7e9630fd8285c50cda7cf636deb7769\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f420acbcdf8ac32ffbc7f6545be0e96c7e9630fd8285c50cda7cf636deb7769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.807836 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4
f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.817213 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.817271 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.817284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.817307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.817330 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.826837 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.846487 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.870620 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3
841a636238cd164e40f765fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:23Z\\\",\\\"message\\\":\\\"7:23:23.631079 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:23:23.631159 6884 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631243 6884 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631499 6884 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:23:23.632231 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:23:23.632268 6884 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 07:23:23.632293 6884 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:23:23.632309 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:23:23.632431 6884 factory.go:656] Stopping watch factory\\\\nI0201 07:23:23.632451 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:23:23.632677 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:23:23.632717 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 07:23:23.632728 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.886171 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.901213 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.921542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.921624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.921722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.921786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.921843 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.024352 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.024830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.024848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.024874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.024896 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.128377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.128477 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.128497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.128524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.128547 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.231866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.231933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.231951 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.231977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.231996 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.335803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.335860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.335880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.335905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.335924 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.439559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.439606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.439623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.439645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.439662 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.544625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.544688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.544704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.544730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.544747 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.552297 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:21:02.965682753 +0000 UTC Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.565880 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:38 crc kubenswrapper[4835]: E0201 07:23:38.566046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.648007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.648067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.648085 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.648109 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.648126 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.752460 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.752527 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.752545 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.752570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.752591 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.855940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.856002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.856022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.856048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.856067 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.959772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.959834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.959850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.959873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.959890 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.062469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.062539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.062557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.062583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.062601 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.166153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.166215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.166233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.166257 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.166277 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.269672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.269756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.270640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.270724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.270749 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.373526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.373569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.373579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.373593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.373602 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.476773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.476837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.476854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.476877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.476894 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.553017 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:46:53.234118994 +0000 UTC Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.565914 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:39 crc kubenswrapper[4835]: E0201 07:23:39.566306 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.566028 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.565947 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:39 crc kubenswrapper[4835]: E0201 07:23:39.566868 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:39 crc kubenswrapper[4835]: E0201 07:23:39.567093 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.579688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.579764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.579787 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.579821 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.579840 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.683885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.683951 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.683974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.684006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.684027 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.787903 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.788174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.788254 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.788337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.788401 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.891288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.891344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.891364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.891389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.891414 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.994565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.994949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.995082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.995211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.995346 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.099057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.099112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.099123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.099141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.099156 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.202991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.203057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.203070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.203091 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.203105 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.306148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.306209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.306229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.306253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.306270 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.409052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.409138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.409151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.409169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.409183 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.511716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.511802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.511823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.511847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.511864 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.553993 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:02:28.213253686 +0000 UTC Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.566630 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:40 crc kubenswrapper[4835]: E0201 07:23:40.566807 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.614618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.614680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.614699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.614727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.614744 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.717802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.717862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.717880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.717912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.717937 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.821245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.821308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.821324 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.821354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.821371 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.924543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.924602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.924647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.924670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.924693 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.027213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.027277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.027295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.027321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.027339 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.130607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.130728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.130749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.130774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.130791 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.234327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.234392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.234415 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.234470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.234489 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.338380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.338502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.338530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.338559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.338581 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.442125 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.442261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.442279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.442304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.442321 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.545757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.545865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.545883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.545909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.545926 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.555262 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:17:51.025772592 +0000 UTC Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.565817 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.565983 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:41 crc kubenswrapper[4835]: E0201 07:23:41.566184 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.566300 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:41 crc kubenswrapper[4835]: E0201 07:23:41.566596 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:41 crc kubenswrapper[4835]: E0201 07:23:41.566800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.650404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.650523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.650550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.650575 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.650593 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.754103 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.754171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.754195 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.754248 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.754274 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.857049 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.857189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.857210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.857232 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.857251 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.960511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.960559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.960569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.960587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.960600 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.064168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.064242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.064262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.064291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.064310 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.166954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.167013 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.167027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.167045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.167057 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.270253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.270308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.270325 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.270349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.270366 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.373116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.373155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.373165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.373182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.373192 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.476555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.476656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.476675 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.476709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.476904 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.556379 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 04:37:18.454397013 +0000 UTC Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.566261 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:42 crc kubenswrapper[4835]: E0201 07:23:42.566394 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.579219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.579296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.579321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.579351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.579371 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.683219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.683282 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.683300 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.683325 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.683343 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.787278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.787586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.787611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.787649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.787672 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.891466 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.891541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.891589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.891620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.891641 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.995163 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.995223 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.995238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.995260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.995273 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.098976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.099064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.099083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.099116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.099137 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:43Z","lastTransitionTime":"2026-02-01T07:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.202469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.202544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.202559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.202580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.202593 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:43Z","lastTransitionTime":"2026-02-01T07:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.305294 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.305404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.305485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.305519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.305540 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:43Z","lastTransitionTime":"2026-02-01T07:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.409455 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.409554 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.409574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.409607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.409629 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:43Z","lastTransitionTime":"2026-02-01T07:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.513075 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.513150 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.513167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.513191 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.513208 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:43Z","lastTransitionTime":"2026-02-01T07:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.557596 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:02:01.745699542 +0000 UTC Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.565982 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.566088 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:43 crc kubenswrapper[4835]: E0201 07:23:43.566190 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.566100 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:43 crc kubenswrapper[4835]: E0201 07:23:43.566348 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:43 crc kubenswrapper[4835]: E0201 07:23:43.566533 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.615938 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.616002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.616019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.616045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.616065 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:43Z","lastTransitionTime":"2026-02-01T07:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.719542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.719605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.719623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.719650 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.719669 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:43Z","lastTransitionTime":"2026-02-01T07:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.822805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.822880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.822903 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.822935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.822957 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:43Z","lastTransitionTime":"2026-02-01T07:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.925950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.926042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.926067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.926101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.926124 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:43Z","lastTransitionTime":"2026-02-01T07:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.029721 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.029778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.029791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.029810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.029826 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.133226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.133306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.133325 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.133355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.133378 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.236363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.236491 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.236511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.236537 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.236556 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.339431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.339509 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.339527 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.339553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.339569 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.442392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.442486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.442505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.442530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.442546 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.545050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.545122 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.545140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.545167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.545200 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.558485 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:06:59.155748883 +0000 UTC Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.565932 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:44 crc kubenswrapper[4835]: E0201 07:23:44.566094 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.648235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.648289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.648303 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.648321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.648333 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.750605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.750656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.750666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.750686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.750697 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.854142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.854195 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.854223 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.854244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.854256 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.957226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.957261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.957269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.957282 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.957290 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.060837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.060887 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.060902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.060919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.060932 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:45Z","lastTransitionTime":"2026-02-01T07:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.163718 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.163785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.163808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.163838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.163859 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:45Z","lastTransitionTime":"2026-02-01T07:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.265999 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.266067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.266088 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.266117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.266140 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:45Z","lastTransitionTime":"2026-02-01T07:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.369677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.369737 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.369759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.369789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.369812 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:45Z","lastTransitionTime":"2026-02-01T07:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.472666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.472754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.472775 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.472803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.472827 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:45Z","lastTransitionTime":"2026-02-01T07:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.559135 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:36:29.856981874 +0000 UTC Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.566861 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:45 crc kubenswrapper[4835]: E0201 07:23:45.567023 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.566880 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.567116 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:45 crc kubenswrapper[4835]: E0201 07:23:45.567313 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:45 crc kubenswrapper[4835]: E0201 07:23:45.567360 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.575059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.575120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.575139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.575164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.575183 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:45Z","lastTransitionTime":"2026-02-01T07:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.619603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.619653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.619668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.619690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.619703 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:45Z","lastTransitionTime":"2026-02-01T07:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.690248 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg"] Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.690821 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.694515 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.694617 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.694648 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.695013 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.715107 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=11.715079168 podStartE2EDuration="11.715079168s" podCreationTimestamp="2026-02-01 07:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.715029167 +0000 UTC m=+98.835465641" watchObservedRunningTime="2026-02-01 07:23:45.715079168 +0000 UTC m=+98.835515642" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.768821 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.768795165 podStartE2EDuration="19.768795165s" podCreationTimestamp="2026-02-01 07:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.750768383 +0000 UTC m=+98.871204817" watchObservedRunningTime="2026-02-01 07:23:45.768795165 +0000 UTC m=+98.889231639" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.769116 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.769107682 podStartE2EDuration="1m18.769107682s" podCreationTimestamp="2026-02-01 07:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.768660221 +0000 UTC m=+98.889096685" watchObservedRunningTime="2026-02-01 07:23:45.769107682 +0000 UTC m=+98.889544156" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.795809 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.795783301 podStartE2EDuration="42.795783301s" podCreationTimestamp="2026-02-01 07:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.795495714 +0000 UTC m=+98.915932188" watchObservedRunningTime="2026-02-01 07:23:45.795783301 +0000 UTC m=+98.916219765" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.800016 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84a06568-5100-4aac-b537-c6ed932d9398-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.800095 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.800142 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a06568-5100-4aac-b537-c6ed932d9398-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.800226 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.800259 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84a06568-5100-4aac-b537-c6ed932d9398-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.849007 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podStartSLOduration=73.848970424 podStartE2EDuration="1m13.848970424s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.832003858 +0000 UTC m=+98.952440302" watchObservedRunningTime="2026-02-01 07:23:45.848970424 +0000 UTC m=+98.969406898" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900788 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84a06568-5100-4aac-b537-c6ed932d9398-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900843 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900875 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/84a06568-5100-4aac-b537-c6ed932d9398-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900937 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900959 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84a06568-5100-4aac-b537-c6ed932d9398-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900991 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.901084 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.901897 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84a06568-5100-4aac-b537-c6ed932d9398-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.911381 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a06568-5100-4aac-b537-c6ed932d9398-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.926029 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84a06568-5100-4aac-b537-c6ed932d9398-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.939310 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" podStartSLOduration=72.939278017 podStartE2EDuration="1m12.939278017s" podCreationTimestamp="2026-02-01 07:22:33 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.937060821 +0000 UTC m=+99.057497265" watchObservedRunningTime="2026-02-01 07:23:45.939278017 +0000 UTC m=+99.059714491" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.939996 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-l7rwg" podStartSLOduration=73.939981714 podStartE2EDuration="1m13.939981714s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.917622804 +0000 UTC m=+99.038059248" watchObservedRunningTime="2026-02-01 07:23:45.939981714 +0000 UTC m=+99.060418218" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.013367 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.015482 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-25s9j" podStartSLOduration=74.015453036 podStartE2EDuration="1m14.015453036s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.987200708 +0000 UTC m=+99.107637182" watchObservedRunningTime="2026-02-01 07:23:46.015453036 +0000 UTC m=+99.135889510" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.038930 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" podStartSLOduration=74.038905443 podStartE2EDuration="1m14.038905443s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:46.017092177 +0000 UTC m=+99.137528631" watchObservedRunningTime="2026-02-01 07:23:46.038905443 +0000 UTC m=+99.159341887" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.039335 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.039328734 podStartE2EDuration="1m17.039328734s" podCreationTimestamp="2026-02-01 07:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:46.038201646 +0000 UTC m=+99.158638110" watchObservedRunningTime="2026-02-01 07:23:46.039328734 +0000 UTC m=+99.159765188" Feb 01 07:23:46 crc kubenswrapper[4835]: W0201 07:23:46.043557 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84a06568_5100_4aac_b537_c6ed932d9398.slice/crio-8cb8363ed955232f2c7f1971d85315af8c1054d141fda8c44a08506885e00182 WatchSource:0}: Error finding container 8cb8363ed955232f2c7f1971d85315af8c1054d141fda8c44a08506885e00182: Status 404 returned error can't find the container with id 8cb8363ed955232f2c7f1971d85315af8c1054d141fda8c44a08506885e00182 Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.096279 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-d8kfl" podStartSLOduration=74.09624048 
podStartE2EDuration="1m14.09624048s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:46.095256885 +0000 UTC m=+99.215693369" watchObservedRunningTime="2026-02-01 07:23:46.09624048 +0000 UTC m=+99.216676924" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.145710 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" event={"ID":"84a06568-5100-4aac-b537-c6ed932d9398","Type":"ContainerStarted","Data":"8cb8363ed955232f2c7f1971d85315af8c1054d141fda8c44a08506885e00182"} Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.559683 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:59:17.407771615 +0000 UTC Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.559767 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.565878 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:46 crc kubenswrapper[4835]: E0201 07:23:46.566143 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.570221 4835 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 01 07:23:47 crc kubenswrapper[4835]: I0201 07:23:47.151842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" event={"ID":"84a06568-5100-4aac-b537-c6ed932d9398","Type":"ContainerStarted","Data":"af1169e57b8eabf748a9eb4a93e85dc64ac61ebfa16dd47206d4e5528bb046f9"} Feb 01 07:23:47 crc kubenswrapper[4835]: I0201 07:23:47.175725 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" podStartSLOduration=75.17569643 podStartE2EDuration="1m15.17569643s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:47.173575757 +0000 UTC m=+100.294012231" watchObservedRunningTime="2026-02-01 07:23:47.17569643 +0000 UTC m=+100.296132904" Feb 01 07:23:47 crc kubenswrapper[4835]: I0201 07:23:47.566117 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:47 crc kubenswrapper[4835]: I0201 07:23:47.566237 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:47 crc kubenswrapper[4835]: I0201 07:23:47.566317 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:47 crc kubenswrapper[4835]: E0201 07:23:47.567671 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:47 crc kubenswrapper[4835]: E0201 07:23:47.567817 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:47 crc kubenswrapper[4835]: E0201 07:23:47.568024 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:48 crc kubenswrapper[4835]: I0201 07:23:48.566742 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:48 crc kubenswrapper[4835]: E0201 07:23:48.567341 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:48 crc kubenswrapper[4835]: I0201 07:23:48.567766 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:23:48 crc kubenswrapper[4835]: E0201 07:23:48.568034 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:23:49 crc kubenswrapper[4835]: I0201 07:23:49.566501 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:49 crc kubenswrapper[4835]: I0201 07:23:49.566571 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:49 crc kubenswrapper[4835]: I0201 07:23:49.566631 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:49 crc kubenswrapper[4835]: E0201 07:23:49.566773 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:49 crc kubenswrapper[4835]: E0201 07:23:49.566999 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:49 crc kubenswrapper[4835]: E0201 07:23:49.567121 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:50 crc kubenswrapper[4835]: I0201 07:23:50.566672 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:50 crc kubenswrapper[4835]: E0201 07:23:50.566823 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:51 crc kubenswrapper[4835]: I0201 07:23:51.565927 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:51 crc kubenswrapper[4835]: I0201 07:23:51.565997 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:51 crc kubenswrapper[4835]: E0201 07:23:51.566183 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:51 crc kubenswrapper[4835]: I0201 07:23:51.566442 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:51 crc kubenswrapper[4835]: E0201 07:23:51.566585 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:51 crc kubenswrapper[4835]: E0201 07:23:51.566791 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:51 crc kubenswrapper[4835]: I0201 07:23:51.673514 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:51 crc kubenswrapper[4835]: E0201 07:23:51.673777 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:23:51 crc kubenswrapper[4835]: E0201 07:23:51.673888 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:24:55.673859179 +0000 UTC m=+168.794295643 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:23:52 crc kubenswrapper[4835]: I0201 07:23:52.566633 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:52 crc kubenswrapper[4835]: E0201 07:23:52.566764 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:53 crc kubenswrapper[4835]: I0201 07:23:53.566734 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:53 crc kubenswrapper[4835]: I0201 07:23:53.566783 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:53 crc kubenswrapper[4835]: E0201 07:23:53.566929 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:53 crc kubenswrapper[4835]: I0201 07:23:53.567240 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:53 crc kubenswrapper[4835]: E0201 07:23:53.567340 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:53 crc kubenswrapper[4835]: E0201 07:23:53.567584 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:54 crc kubenswrapper[4835]: I0201 07:23:54.566273 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:54 crc kubenswrapper[4835]: E0201 07:23:54.566501 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:55 crc kubenswrapper[4835]: I0201 07:23:55.566120 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:55 crc kubenswrapper[4835]: I0201 07:23:55.566204 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:55 crc kubenswrapper[4835]: I0201 07:23:55.566207 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:55 crc kubenswrapper[4835]: E0201 07:23:55.566287 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:55 crc kubenswrapper[4835]: E0201 07:23:55.566460 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:55 crc kubenswrapper[4835]: E0201 07:23:55.566609 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:56 crc kubenswrapper[4835]: I0201 07:23:56.566297 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:56 crc kubenswrapper[4835]: E0201 07:23:56.566916 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:57 crc kubenswrapper[4835]: I0201 07:23:57.565929 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:57 crc kubenswrapper[4835]: I0201 07:23:57.566032 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:57 crc kubenswrapper[4835]: I0201 07:23:57.566168 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:57 crc kubenswrapper[4835]: E0201 07:23:57.567800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:57 crc kubenswrapper[4835]: E0201 07:23:57.568028 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:57 crc kubenswrapper[4835]: E0201 07:23:57.568124 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:58 crc kubenswrapper[4835]: I0201 07:23:58.566494 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:58 crc kubenswrapper[4835]: E0201 07:23:58.566676 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:59 crc kubenswrapper[4835]: I0201 07:23:59.566639 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:59 crc kubenswrapper[4835]: I0201 07:23:59.566721 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:59 crc kubenswrapper[4835]: I0201 07:23:59.566795 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:59 crc kubenswrapper[4835]: E0201 07:23:59.566830 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:59 crc kubenswrapper[4835]: E0201 07:23:59.566906 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:59 crc kubenswrapper[4835]: E0201 07:23:59.567070 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:00 crc kubenswrapper[4835]: I0201 07:24:00.566688 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:00 crc kubenswrapper[4835]: E0201 07:24:00.567377 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:00 crc kubenswrapper[4835]: I0201 07:24:00.568189 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:24:00 crc kubenswrapper[4835]: E0201 07:24:00.568740 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:24:01 crc kubenswrapper[4835]: I0201 07:24:01.566065 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:01 crc kubenswrapper[4835]: I0201 07:24:01.566214 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:01 crc kubenswrapper[4835]: E0201 07:24:01.566234 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:01 crc kubenswrapper[4835]: I0201 07:24:01.566301 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:01 crc kubenswrapper[4835]: E0201 07:24:01.566396 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:01 crc kubenswrapper[4835]: E0201 07:24:01.566634 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:02 crc kubenswrapper[4835]: I0201 07:24:02.566626 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:02 crc kubenswrapper[4835]: E0201 07:24:02.566826 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:03 crc kubenswrapper[4835]: I0201 07:24:03.566090 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:03 crc kubenswrapper[4835]: I0201 07:24:03.566144 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:03 crc kubenswrapper[4835]: E0201 07:24:03.566324 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:03 crc kubenswrapper[4835]: I0201 07:24:03.566692 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:03 crc kubenswrapper[4835]: E0201 07:24:03.567125 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:03 crc kubenswrapper[4835]: E0201 07:24:03.567264 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:04 crc kubenswrapper[4835]: I0201 07:24:04.566282 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:04 crc kubenswrapper[4835]: E0201 07:24:04.566476 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:05 crc kubenswrapper[4835]: I0201 07:24:05.566123 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:05 crc kubenswrapper[4835]: I0201 07:24:05.566220 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:05 crc kubenswrapper[4835]: E0201 07:24:05.566305 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:05 crc kubenswrapper[4835]: E0201 07:24:05.566477 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:05 crc kubenswrapper[4835]: I0201 07:24:05.566532 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:05 crc kubenswrapper[4835]: E0201 07:24:05.566687 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:06 crc kubenswrapper[4835]: I0201 07:24:06.566072 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:06 crc kubenswrapper[4835]: E0201 07:24:06.566401 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.224173 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/1.log" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.224975 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/0.log" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.225044 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e" containerID="c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27" exitCode=1 Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.225090 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerDied","Data":"c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27"} Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.225137 4835 scope.go:117] "RemoveContainer" containerID="213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.226017 4835 scope.go:117] "RemoveContainer" containerID="c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.226453 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-25s9j_openshift-multus(c9342eb7-b5ae-47b2-a56d-91ae886e5f0e)\"" pod="openshift-multus/multus-25s9j" podUID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.506208 4835 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.566030 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.566111 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.566533 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.566822 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.566951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.567067 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.683647 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 01 07:24:08 crc kubenswrapper[4835]: I0201 07:24:08.231347 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/1.log" Feb 01 07:24:08 crc kubenswrapper[4835]: I0201 07:24:08.566506 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:08 crc kubenswrapper[4835]: E0201 07:24:08.566695 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:09 crc kubenswrapper[4835]: I0201 07:24:09.566086 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:09 crc kubenswrapper[4835]: I0201 07:24:09.566152 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:09 crc kubenswrapper[4835]: I0201 07:24:09.566108 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:09 crc kubenswrapper[4835]: E0201 07:24:09.566310 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:09 crc kubenswrapper[4835]: E0201 07:24:09.566462 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:09 crc kubenswrapper[4835]: E0201 07:24:09.566723 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:10 crc kubenswrapper[4835]: I0201 07:24:10.566126 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:10 crc kubenswrapper[4835]: E0201 07:24:10.566321 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:11 crc kubenswrapper[4835]: I0201 07:24:11.566988 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:11 crc kubenswrapper[4835]: I0201 07:24:11.567071 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:11 crc kubenswrapper[4835]: I0201 07:24:11.567106 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:11 crc kubenswrapper[4835]: E0201 07:24:11.567242 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:11 crc kubenswrapper[4835]: E0201 07:24:11.567616 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:11 crc kubenswrapper[4835]: E0201 07:24:11.567760 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:11 crc kubenswrapper[4835]: I0201 07:24:11.568924 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.250721 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/3.log" Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.253978 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.254566 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.286178 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podStartSLOduration=100.286160084 podStartE2EDuration="1m40.286160084s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:12.284841281 +0000 UTC m=+125.405277745" watchObservedRunningTime="2026-02-01 07:24:12.286160084 +0000 UTC m=+125.406596528" Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.565961 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:12 crc kubenswrapper[4835]: E0201 07:24:12.566230 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.630235 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2msm5"] Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.630529 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:12 crc kubenswrapper[4835]: E0201 07:24:12.630832 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:12 crc kubenswrapper[4835]: E0201 07:24:12.685932 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 01 07:24:13 crc kubenswrapper[4835]: I0201 07:24:13.566758 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:13 crc kubenswrapper[4835]: I0201 07:24:13.567003 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:13 crc kubenswrapper[4835]: E0201 07:24:13.567258 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:13 crc kubenswrapper[4835]: E0201 07:24:13.567489 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:14 crc kubenswrapper[4835]: I0201 07:24:14.566772 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:14 crc kubenswrapper[4835]: I0201 07:24:14.566876 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:14 crc kubenswrapper[4835]: E0201 07:24:14.566955 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:14 crc kubenswrapper[4835]: E0201 07:24:14.567110 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:15 crc kubenswrapper[4835]: I0201 07:24:15.566002 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:15 crc kubenswrapper[4835]: I0201 07:24:15.566087 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:15 crc kubenswrapper[4835]: E0201 07:24:15.566210 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:15 crc kubenswrapper[4835]: E0201 07:24:15.566951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:16 crc kubenswrapper[4835]: I0201 07:24:16.180748 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:24:16 crc kubenswrapper[4835]: I0201 07:24:16.566752 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:16 crc kubenswrapper[4835]: I0201 07:24:16.566774 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:16 crc kubenswrapper[4835]: E0201 07:24:16.566939 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:16 crc kubenswrapper[4835]: E0201 07:24:16.567085 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:17 crc kubenswrapper[4835]: I0201 07:24:17.565722 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:17 crc kubenswrapper[4835]: I0201 07:24:17.565722 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:17 crc kubenswrapper[4835]: E0201 07:24:17.567943 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:17 crc kubenswrapper[4835]: E0201 07:24:17.567854 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:17 crc kubenswrapper[4835]: E0201 07:24:17.687576 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 01 07:24:18 crc kubenswrapper[4835]: I0201 07:24:18.566187 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:18 crc kubenswrapper[4835]: I0201 07:24:18.566233 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:18 crc kubenswrapper[4835]: E0201 07:24:18.566378 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:18 crc kubenswrapper[4835]: E0201 07:24:18.567008 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:19 crc kubenswrapper[4835]: I0201 07:24:19.566558 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:19 crc kubenswrapper[4835]: I0201 07:24:19.566633 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:19 crc kubenswrapper[4835]: E0201 07:24:19.566735 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:19 crc kubenswrapper[4835]: E0201 07:24:19.566932 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:20 crc kubenswrapper[4835]: I0201 07:24:20.566335 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:20 crc kubenswrapper[4835]: I0201 07:24:20.566474 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:20 crc kubenswrapper[4835]: E0201 07:24:20.566538 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:20 crc kubenswrapper[4835]: E0201 07:24:20.566655 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:20 crc kubenswrapper[4835]: I0201 07:24:20.567799 4835 scope.go:117] "RemoveContainer" containerID="c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27" Feb 01 07:24:21 crc kubenswrapper[4835]: I0201 07:24:21.291519 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/1.log" Feb 01 07:24:21 crc kubenswrapper[4835]: I0201 07:24:21.291946 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerStarted","Data":"bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22"} Feb 01 07:24:21 crc kubenswrapper[4835]: I0201 07:24:21.566144 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:21 crc kubenswrapper[4835]: E0201 07:24:21.566293 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:21 crc kubenswrapper[4835]: I0201 07:24:21.566605 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:21 crc kubenswrapper[4835]: E0201 07:24:21.566824 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:22 crc kubenswrapper[4835]: I0201 07:24:22.566714 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:22 crc kubenswrapper[4835]: E0201 07:24:22.566899 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:22 crc kubenswrapper[4835]: I0201 07:24:22.566741 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:22 crc kubenswrapper[4835]: E0201 07:24:22.567004 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.566296 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.566314 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.569786 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.575343 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.576122 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.576570 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 01 07:24:24 crc kubenswrapper[4835]: I0201 07:24:24.566106 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:24 crc kubenswrapper[4835]: I0201 07:24:24.566124 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:24 crc kubenswrapper[4835]: I0201 07:24:24.572116 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 01 07:24:24 crc kubenswrapper[4835]: I0201 07:24:24.572189 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.388555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.436460 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bztv4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.437364 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.438533 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-547k6"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.439349 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.440223 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.440860 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.446339 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.446658 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.447088 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.447172 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.449133 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.451284 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.451582 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.451755 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.452129 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 
07:24:26.452276 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.452921 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.453056 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.453153 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.453164 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.455703 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.453955 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.456129 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457062 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457153 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-client\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457222 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-serving-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457281 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457317 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-image-import-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457391 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j58tf\" (UniqueName: \"kubernetes.io/projected/5b3e26c6-a029-4767-b371-579d2c682296-kube-api-access-j58tf\") pod 
\"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457499 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457495 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-node-pullsecrets\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457701 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457756 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-auth-proxy-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457818 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b3e26c6-a029-4767-b371-579d2c682296-machine-approver-tls\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457878 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458022 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-serving-cert\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458191 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458270 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit-dir\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458335 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qckj9\" (UniqueName: \"kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458503 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-encryption-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458573 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458628 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2chhv\" (UniqueName: \"kubernetes.io/projected/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-kube-api-access-2chhv\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458716 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458784 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.460435 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whqd4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.461369 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.465693 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.466059 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.466539 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.467115 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.469824 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.478727 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.479431 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.480442 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.480965 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.481068 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.482292 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.484042 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t4w45"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.484656 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.489868 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.490405 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.491256 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.492608 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x4ddr"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.493234 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.494328 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8hgqx"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.494962 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.541730 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.542250 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.544090 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.544484 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.544923 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.545198 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.546255 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.546338 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.546480 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.550952 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.551132 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553040 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553152 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553251 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553350 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553665 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553908 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554064 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554182 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554355 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554462 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554534 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554680 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554720 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554813 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554852 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554684 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554972 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.560023 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.560442 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xgqrp"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.560742 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-k8v8n"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.561014 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.561301 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.561577 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562005 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562271 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562712 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-console-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562737 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562769 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562788 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-dir\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562805 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-images\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562819 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562843 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562863 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-trusted-ca\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562878 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8z7\" (UniqueName: \"kubernetes.io/projected/9154a093-1841-44f5-a71d-e42f5c19dfba-kube-api-access-td8z7\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562895 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-client\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562910 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-serving-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562926 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-service-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562944 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-serving-cert\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562959 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-service-ca\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562974 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562998 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-image-import-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563015 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad3b595-c72f-49b8-92e0-932f9f591375-serving-cert\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563030 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrz6\" (UniqueName: \"kubernetes.io/projected/cad3b595-c72f-49b8-92e0-932f9f591375-kube-api-access-bjrz6\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563044 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-config\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563061 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j58tf\" (UniqueName: \"kubernetes.io/projected/5b3e26c6-a029-4767-b371-579d2c682296-kube-api-access-j58tf\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563079 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563095 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563111 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-node-pullsecrets\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563126 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563144 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-auth-proxy-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563159 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5rsk\" (UniqueName: \"kubernetes.io/projected/03f29b26-d2bd-48e2-9804-c90a5315658c-kube-api-access-m5rsk\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563175 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-config\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563191 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94wjd\" (UniqueName: \"kubernetes.io/projected/8924e4db-3c47-4e66-90d1-e74e49f3a65d-kube-api-access-94wjd\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563207 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563221 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563236 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b3e26c6-a029-4767-b371-579d2c682296-machine-approver-tls\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563251 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563266 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563280 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-oauth-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563296 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-serving-cert\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563311 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563326 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563343 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563357 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563378 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit-dir\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 
crc kubenswrapper[4835]: I0201 07:24:26.563393 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563423 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjc5q\" (UniqueName: \"kubernetes.io/projected/90833a57-ccdb-452f-b86a-7741f52c5a80-kube-api-access-bjc5q\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563441 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563457 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563472 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxnz\" (UniqueName: \"kubernetes.io/projected/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-kube-api-access-pfxnz\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563490 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qckj9\" (UniqueName: \"kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563505 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76xnv\" (UniqueName: \"kubernetes.io/projected/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-kube-api-access-76xnv\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563518 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-client\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563533 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563546 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-config\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563569 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-oauth-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563583 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8924e4db-3c47-4e66-90d1-e74e49f3a65d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563606 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563622 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563637 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563651 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90833a57-ccdb-452f-b86a-7741f52c5a80-serving-cert\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563672 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-policies\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563689 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-encryption-config\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563707 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563721 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-encryption-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563739 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-trusted-ca-bundle\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563755 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2chhv\" (UniqueName: \"kubernetes.io/projected/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-kube-api-access-2chhv\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563769 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563785 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nptzx\" (UniqueName: \"kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563801 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563816 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/90833a57-ccdb-452f-b86a-7741f52c5a80-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563832 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-serving-cert\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.564365 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.564388 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.564748 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.564980 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zq4gf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.565274 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.565343 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.565403 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-node-pullsecrets\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.565524 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.565641 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.566099 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.567684 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-auth-proxy-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.567887 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-serving-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.568482 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.568515 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.568552 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit-dir\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.571496 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-client\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.571753 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-image-import-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.572154 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bztv4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.572552 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.574467 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b3e26c6-a029-4767-b371-579d2c682296-machine-approver-tls\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.574520 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.574856 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.574987 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.575235 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.575375 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.575527 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.575949 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-serving-cert\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.578197 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-sdz4h"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.579018 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.579472 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.580057 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.580711 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.581241 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.581880 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.582204 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.583740 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.584136 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.584855 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-encryption-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.584940 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.585850 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.591905 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592226 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592337 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592458 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592620 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592730 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592845 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592880 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.593012 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.593101 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.596820 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.598279 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.609706 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.609858 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.610019 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.611016 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.610093 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.611781 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.611998 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.612196 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.612276 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.612818 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.612933 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.613114 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.613402 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.614293 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.614885 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fbdw8"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.615673 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.616202 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.616401 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.619518 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.620096 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.620284 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.620346 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.620707 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.621216 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.621332 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.621563 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.622687 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.624168 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.624288 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.624842 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.625345 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.625750 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.636334 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.637286 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.637659 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.637922 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.637941 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.637684 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.645568 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.646206 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.646212 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.646519 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.646909 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.647395 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.647516 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.648634 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4qc29"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.649528 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.653588 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.655470 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.655812 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.656030 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.657023 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.657261 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.657556 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.661846 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.661904 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664311 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-serving-cert\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664347 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664368 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-config\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664387 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqdn6\" (UniqueName: \"kubernetes.io/projected/230baada-7ff6-4b95-b44f-b46e54fe1375-kube-api-access-sqdn6\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 
crc kubenswrapper[4835]: I0201 07:24:26.664424 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttz52\" (UniqueName: \"kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664443 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-console-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664459 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664474 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664491 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/92112e1c-6b23-4d10-9f2b-0e33616c96f5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664509 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664524 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-dir\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664543 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-images\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664559 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-trusted-ca\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8z7\" (UniqueName: \"kubernetes.io/projected/9154a093-1841-44f5-a71d-e42f5c19dfba-kube-api-access-td8z7\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664618 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664637 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-service-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664659 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-service-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664673 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/29ce863d-02cf-43c6-a249-bfef15cf04be-kube-api-access-b4j9d\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664688 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664702 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664718 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrgj\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-kube-api-access-vfrgj\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664754 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-service-ca-bundle\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664771 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664788 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e130d-2f68-47ef-8b6c-2871d38a2282-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664806 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/a86cb99d-3be8-4acb-98f7-87c5df66c339-kube-api-access-2tjgn\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664835 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-serving-cert\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664850 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-service-ca\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664866 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664882 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/865ec974-02ed-4218-a599-cf69b6f0a538-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664898 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664922 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrz6\" (UniqueName: \"kubernetes.io/projected/cad3b595-c72f-49b8-92e0-932f9f591375-kube-api-access-bjrz6\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664940 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-config\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664954 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad3b595-c72f-49b8-92e0-932f9f591375-serving-cert\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664977 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664992 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665009 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-images\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665031 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2xl\" (UniqueName: \"kubernetes.io/projected/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-kube-api-access-qz2xl\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665055 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5rsk\" (UniqueName: \"kubernetes.io/projected/03f29b26-d2bd-48e2-9804-c90a5315658c-kube-api-access-m5rsk\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665074 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-config\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665091 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94wjd\" (UniqueName: \"kubernetes.io/projected/8924e4db-3c47-4e66-90d1-e74e49f3a65d-kube-api-access-94wjd\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665108 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665124 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665149 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-metrics-certs\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665164 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: 
\"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665181 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665196 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-client\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665217 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863e130d-2f68-47ef-8b6c-2871d38a2282-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665238 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230baada-7ff6-4b95-b44f-b46e54fe1375-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665275 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-oauth-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665296 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e130d-2f68-47ef-8b6c-2871d38a2282-config\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665328 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665353 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-console-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: 
I0201 07:24:26.665357 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665456 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg2rk\" (UniqueName: \"kubernetes.io/projected/8589782d-8533-4419-b9bf-115446144a39-kube-api-access-gg2rk\") pod \"migrator-59844c95c7-7nw98\" (UID: \"8589782d-8533-4419-b9bf-115446144a39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665496 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665525 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-stats-auth\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665554 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtlhj\" (UniqueName: \"kubernetes.io/projected/d597b1c7-2562-45a2-b301-14d0db548bc8-kube-api-access-xtlhj\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665581 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlkfg\" (UniqueName: \"kubernetes.io/projected/92112e1c-6b23-4d10-9f2b-0e33616c96f5-kube-api-access-qlkfg\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665615 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjzt\" (UniqueName: \"kubernetes.io/projected/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-kube-api-access-kdjzt\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665661 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " 
pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665688 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjc5q\" (UniqueName: \"kubernetes.io/projected/90833a57-ccdb-452f-b86a-7741f52c5a80-kube-api-access-bjc5q\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665715 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665743 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665766 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-serving-cert\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665823 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665848 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665874 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t4w45"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665925 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.666456 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.666482 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.666503 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.666457 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-dir\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667284 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667477 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-images\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667510 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665879 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfxnz\" (UniqueName: \"kubernetes.io/projected/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-kube-api-access-pfxnz\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667605 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667639 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76xnv\" (UniqueName: \"kubernetes.io/projected/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-kube-api-access-76xnv\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667661 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-client\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667679 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-default-certificate\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667686 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667695 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqf4\" (UniqueName: \"kubernetes.io/projected/60b0275a-57b6-482d-b046-ffd270801add-kube-api-access-fhqf4\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667716 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667841 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-config\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667862 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a86cb99d-3be8-4acb-98f7-87c5df66c339-proxy-tls\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667919 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-oauth-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667935 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8924e4db-3c47-4e66-90d1-e74e49f3a65d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87528d59-5bdb-4e92-8d6e-062005390f6f-metrics-tls\") pod 
\"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667976 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667976 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.677022 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-trusted-ca\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.677461 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.677714 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-service-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.678324 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.678579 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-config\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.679584 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680672 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680760 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrb9\" (UniqueName: \"kubernetes.io/projected/87528d59-5bdb-4e92-8d6e-062005390f6f-kube-api-access-lcrb9\") pod \"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680863 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-policies\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680942 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681078 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90833a57-ccdb-452f-b86a-7741f52c5a80-serving-cert\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681164 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-encryption-config\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681258 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-trusted-ca-bundle\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681330 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d597b1c7-2562-45a2-b301-14d0db548bc8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681443 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681525 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nptzx\" (UniqueName: \"kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.684983 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.685114 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/230baada-7ff6-4b95-b44f-b46e54fe1375-proxy-tls\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.685199 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/90833a57-ccdb-452f-b86a-7741f52c5a80-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.685292 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/865ec974-02ed-4218-a599-cf69b6f0a538-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.685361 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597b1c7-2562-45a2-b301-14d0db548bc8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.679924 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.686162 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-client\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681347 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.687398 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681804 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.682530 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.684325 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-oauth-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.688439 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.688878 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.689212 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/90833a57-ccdb-452f-b86a-7741f52c5a80-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.684809 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-config\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.684804 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whqd4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.689614 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.690637 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.691121 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.691505 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680262 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-serving-cert\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680796 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-service-ca\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.691916 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8hgqx"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.691998 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x4ddr"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.691974 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-policies\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.692273 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.690848 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.692719 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.693182 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8924e4db-3c47-4e66-90d1-e74e49f3a65d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.693792 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-oauth-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.693901 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.693926 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.694543 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-config\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.694870 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-trusted-ca-bundle\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.695191 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90833a57-ccdb-452f-b86a-7741f52c5a80-serving-cert\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.695876 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m"] Feb 
01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.696082 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.696470 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j58tf\" (UniqueName: \"kubernetes.io/projected/5b3e26c6-a029-4767-b371-579d2c682296-kube-api-access-j58tf\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.697539 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad3b595-c72f-49b8-92e0-932f9f591375-serving-cert\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.697553 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.698099 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-serving-cert\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.698163 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.698613 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xgqrp"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.698922 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.699213 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-encryption-config\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 
07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.699275 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.699530 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.700476 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.701784 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.705172 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.708486 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.710443 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.711954 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.713281 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.714340 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.715205 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gmr7g"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.716120 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.716203 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.717206 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.718202 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k8v8n"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.719273 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.720194 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.721150 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.722136 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.723100 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zq4gf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.724089 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwsnp"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.725124 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.725211 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.726159 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.726325 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.727115 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.728114 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.729048 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.730059 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.731276 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gmr7g"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.732466 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.732977 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.733938 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4qc29"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.735245 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.736287 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fbdw8"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.737286 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwsnp"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.738329 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.738592 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.739672 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2vc59"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.740202 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.740730 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-shvm4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.741562 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.742187 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-shvm4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.758820 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786565 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786621 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d597b1c7-2562-45a2-b301-14d0db548bc8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786671 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786690 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/230baada-7ff6-4b95-b44f-b46e54fe1375-proxy-tls\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786706 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/865ec974-02ed-4218-a599-cf69b6f0a538-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786722 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597b1c7-2562-45a2-b301-14d0db548bc8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786763 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc 
kubenswrapper[4835]: I0201 07:24:26.786785 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttz52\" (UniqueName: \"kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786805 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-config\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786841 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqdn6\" (UniqueName: \"kubernetes.io/projected/230baada-7ff6-4b95-b44f-b46e54fe1375-kube-api-access-sqdn6\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786879 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786915 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786934 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/92112e1c-6b23-4d10-9f2b-0e33616c96f5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786963 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787010 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-service-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787026 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/29ce863d-02cf-43c6-a249-bfef15cf04be-kube-api-access-b4j9d\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787044 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787081 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787096 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787114 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e130d-2f68-47ef-8b6c-2871d38a2282-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787148 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/a86cb99d-3be8-4acb-98f7-87c5df66c339-kube-api-access-2tjgn\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787158 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787167 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfrgj\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-kube-api-access-vfrgj\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787185 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-service-ca-bundle\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 
07:24:26.787202 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/865ec974-02ed-4218-a599-cf69b6f0a538-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787240 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787274 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-images\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787312 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2xl\" (UniqueName: \"kubernetes.io/projected/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-kube-api-access-qz2xl\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788111 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-metrics-certs\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788132 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788311 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788317 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-client\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788381 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/863e130d-2f68-47ef-8b6c-2871d38a2282-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230baada-7ff6-4b95-b44f-b46e54fe1375-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788442 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e130d-2f68-47ef-8b6c-2871d38a2282-config\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788466 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg2rk\" (UniqueName: \"kubernetes.io/projected/8589782d-8533-4419-b9bf-115446144a39-kube-api-access-gg2rk\") pod \"migrator-59844c95c7-7nw98\" (UID: \"8589782d-8533-4419-b9bf-115446144a39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788489 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-stats-auth\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788508 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtlhj\" (UniqueName: \"kubernetes.io/projected/d597b1c7-2562-45a2-b301-14d0db548bc8-kube-api-access-xtlhj\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788526 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlkfg\" (UniqueName: \"kubernetes.io/projected/92112e1c-6b23-4d10-9f2b-0e33616c96f5-kube-api-access-qlkfg\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788548 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdjzt\" (UniqueName: \"kubernetes.io/projected/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-kube-api-access-kdjzt\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-serving-cert\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788598 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788624 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788641 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788663 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-default-certificate\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788679 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqf4\" (UniqueName: \"kubernetes.io/projected/60b0275a-57b6-482d-b046-ffd270801add-kube-api-access-fhqf4\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788709 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a86cb99d-3be8-4acb-98f7-87c5df66c339-proxy-tls\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788731 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87528d59-5bdb-4e92-8d6e-062005390f6f-metrics-tls\") pod \"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788783 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcrb9\" (UniqueName: \"kubernetes.io/projected/87528d59-5bdb-4e92-8d6e-062005390f6f-kube-api-access-lcrb9\") pod \"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788813 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788945 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.789094 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/865ec974-02ed-4218-a599-cf69b6f0a538-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.789589 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.789627 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e130d-2f68-47ef-8b6c-2871d38a2282-config\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.789675 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.790043 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230baada-7ff6-4b95-b44f-b46e54fe1375-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.790835 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.791190 4835 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.791999 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-serving-cert\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.792008 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.792515 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/92112e1c-6b23-4d10-9f2b-0e33616c96f5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.792918 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-client\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.792960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87528d59-5bdb-4e92-8d6e-062005390f6f-metrics-tls\") pod \"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.794009 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e130d-2f68-47ef-8b6c-2871d38a2282-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.800845 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 01 07:24:26 crc kubenswrapper[4835]: W0201 07:24:26.803514 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b3e26c6_a029_4767_b371_579d2c682296.slice/crio-16f3beecfd6587f4f93a6de28f874c2b04b37f7bab6f970a7e8163d9e1c9c34b WatchSource:0}: Error finding container 16f3beecfd6587f4f93a6de28f874c2b04b37f7bab6f970a7e8163d9e1c9c34b: Status 404 returned error can't find the container with id 16f3beecfd6587f4f93a6de28f874c2b04b37f7bab6f970a7e8163d9e1c9c34b Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.820518 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.828569 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-config\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.839789 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.848771 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.859049 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.868446 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-service-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.880246 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.900290 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.921521 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.931280 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/865ec974-02ed-4218-a599-cf69b6f0a538-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.939788 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.959050 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.999737 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qckj9\" (UniqueName: \"kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.023061 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 01 07:24:27 crc kubenswrapper[4835]: 
I0201 07:24:27.031286 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2chhv\" (UniqueName: \"kubernetes.io/projected/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-kube-api-access-2chhv\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.040925 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.059704 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.075983 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.080520 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.101280 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.120431 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.140204 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.149650 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.160163 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.171807 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-metrics-certs\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.183832 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.193950 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-default-certificate\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.235617 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.235673 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.239139 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-service-ca-bundle\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.239705 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.245266 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-stats-auth\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.264195 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.280339 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.300519 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.314173 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" event={"ID":"5b3e26c6-a029-4767-b371-579d2c682296","Type":"ContainerStarted","Data":"c148ef3c7d3fd5fd5bb0f93108341f537087d34a5401d5d8334f9efa0fc966a6"} Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.314218 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" event={"ID":"5b3e26c6-a029-4767-b371-579d2c682296","Type":"ContainerStarted","Data":"16f3beecfd6587f4f93a6de28f874c2b04b37f7bab6f970a7e8163d9e1c9c34b"} Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.322263 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.339642 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.358627 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.369181 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bztv4"] Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.372471 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:27 crc kubenswrapper[4835]: W0201 07:24:27.374619 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbc68445_c2f0_43a6_a4f5_6ea9b4a37d1c.slice/crio-de74b446923947702a3ad65c60e77cb0f7508de27dc774fdcff1583059a317eb WatchSource:0}: Error finding container de74b446923947702a3ad65c60e77cb0f7508de27dc774fdcff1583059a317eb: Status 404 returned error can't find the container with id de74b446923947702a3ad65c60e77cb0f7508de27dc774fdcff1583059a317eb Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.381006 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.385110 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.389604 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:27 crc kubenswrapper[4835]: W0201 07:24:27.391761 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f4b60b_0076_4087_b541_4617c3752687.slice/crio-eaca48a7b94d929256f67ed77a297ce26bfbe10f609a2d3253d4e4ba2b33d879 WatchSource:0}: Error finding container eaca48a7b94d929256f67ed77a297ce26bfbe10f609a2d3253d4e4ba2b33d879: Status 404 returned error can't find the container with id eaca48a7b94d929256f67ed77a297ce26bfbe10f609a2d3253d4e4ba2b33d879 Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.399236 4835 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.419768 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.439741 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.452925 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d597b1c7-2562-45a2-b301-14d0db548bc8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.459717 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.480637 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.488455 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597b1c7-2562-45a2-b301-14d0db548bc8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.498732 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.519527 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.539595 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.559535 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.587971 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.598462 4835 request.go:700] Waited for 1.014041373s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.599647 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.620165 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 01 07:24:27 crc 
kubenswrapper[4835]: I0201 07:24:27.639370 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.650794 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/230baada-7ff6-4b95-b44f-b46e54fe1375-proxy-tls\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.659545 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.668453 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-images\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.681065 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.700282 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.722195 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a86cb99d-3be8-4acb-98f7-87c5df66c339-proxy-tls\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.738973 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.761977 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 01 07:24:27 crc kubenswrapper[4835]: E0201 07:24:27.787676 4835 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 01 07:24:27 crc kubenswrapper[4835]: E0201 07:24:27.787775 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert podName:60b0275a-57b6-482d-b046-ffd270801add nodeName:}" failed. No retries permitted until 2026-02-01 07:24:28.287751502 +0000 UTC m=+141.408187956 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert") pod "olm-operator-6b444d44fb-p5fjs" (UID: "60b0275a-57b6-482d-b046-ffd270801add") : failed to sync secret cache: timed out waiting for the condition Feb 01 07:24:27 crc kubenswrapper[4835]: E0201 07:24:27.789099 4835 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 01 07:24:27 crc kubenswrapper[4835]: E0201 07:24:27.789152 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert podName:60b0275a-57b6-482d-b046-ffd270801add nodeName:}" failed. No retries permitted until 2026-02-01 07:24:28.289138498 +0000 UTC m=+141.409574942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert") pod "olm-operator-6b444d44fb-p5fjs" (UID: "60b0275a-57b6-482d-b046-ffd270801add") : failed to sync secret cache: timed out waiting for the condition Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.799473 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.819877 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.840238 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.860977 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.880957 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.899929 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.920210 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.940372 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.960533 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.980669 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.000667 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.039384 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.047782 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfxnz\" (UniqueName: \"kubernetes.io/projected/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-kube-api-access-pfxnz\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.059602 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.072192 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.114471 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrz6\" (UniqueName: \"kubernetes.io/projected/cad3b595-c72f-49b8-92e0-932f9f591375-kube-api-access-bjrz6\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.140657 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8z7\" (UniqueName: \"kubernetes.io/projected/9154a093-1841-44f5-a71d-e42f5c19dfba-kube-api-access-td8z7\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.150327 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76xnv\" (UniqueName: \"kubernetes.io/projected/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-kube-api-access-76xnv\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.157921 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.164906 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.169552 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjc5q\" (UniqueName: \"kubernetes.io/projected/90833a57-ccdb-452f-b86a-7741f52c5a80-kube-api-access-bjc5q\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.183548 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.190780 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nptzx\" (UniqueName: \"kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.196717 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.199888 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.219965 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.232943 4835 csr.go:261] certificate signing request csr-8s6rb is approved, waiting to be issued Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.242205 4835 csr.go:257] certificate signing request csr-8s6rb is issued Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.252843 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.261299 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.280807 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.300017 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.318990 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.324169 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.324581 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" event={"ID":"46f4b60b-0076-4087-b541-4617c3752687","Type":"ContainerStarted","Data":"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c"} Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.324611 
4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" event={"ID":"46f4b60b-0076-4087-b541-4617c3752687","Type":"ContainerStarted","Data":"eaca48a7b94d929256f67ed77a297ce26bfbe10f609a2d3253d4e4ba2b33d879"} Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.325032 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.326229 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" event={"ID":"5b3e26c6-a029-4767-b371-579d2c682296","Type":"ContainerStarted","Data":"4cb1d35e6c7f1e7a19ba678cd0e9b0a10806a65e7550f712108a7bad7aea1c82"} Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.328793 4835 generic.go:334] "Generic (PLEG): container finished" podID="bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c" containerID="2901b34895fd28d9896d7002e8236e953069596c04847e31beb93b86309f900c" exitCode=0 Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.328823 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" event={"ID":"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c","Type":"ContainerDied","Data":"2901b34895fd28d9896d7002e8236e953069596c04847e31beb93b86309f900c"} Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.328841 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" event={"ID":"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c","Type":"ContainerStarted","Data":"de74b446923947702a3ad65c60e77cb0f7508de27dc774fdcff1583059a317eb"} Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.339318 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.348545 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.350599 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.354019 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.354500 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.359866 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.379622 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.391085 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x4ddr"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.399867 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: W0201 07:24:28.406491 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcad3b595_c72f_49b8_92e0_932f9f591375.slice/crio-2271121ff70fd4f78d2c75e5e8c785b61155e20a1253d6b08d8d07df78c9f569 WatchSource:0}: Error finding container 2271121ff70fd4f78d2c75e5e8c785b61155e20a1253d6b08d8d07df78c9f569: Status 404 returned error can't find the container with id 2271121ff70fd4f78d2c75e5e8c785b61155e20a1253d6b08d8d07df78c9f569 Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.414126 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.437012 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5rsk\" (UniqueName: \"kubernetes.io/projected/03f29b26-d2bd-48e2-9804-c90a5315658c-kube-api-access-m5rsk\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.447620 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.454212 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94wjd\" (UniqueName: \"kubernetes.io/projected/8924e4db-3c47-4e66-90d1-e74e49f3a65d-kube-api-access-94wjd\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.459194 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.479621 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.499193 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.522047 4835 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.522397 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.541198 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.560728 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.596073 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.598607 4835 request.go:700] Waited for 1.858201758s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.599772 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.621500 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.629048 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.639563 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8hgqx"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.639783 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 01 07:24:28 crc kubenswrapper[4835]: W0201 07:24:28.645119 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90833a57_ccdb_452f_b86a_7741f52c5a80.slice/crio-86004b2c0e2950bc7fcf4234811311c6c60c4eb8bbeb3a5fbbf9c12d3ebba80e WatchSource:0}: Error finding container 86004b2c0e2950bc7fcf4234811311c6c60c4eb8bbeb3a5fbbf9c12d3ebba80e: Status 404 returned error can't find the container with id 86004b2c0e2950bc7fcf4234811311c6c60c4eb8bbeb3a5fbbf9c12d3ebba80e Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.654200 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.656219 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.660199 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: W0201 07:24:28.677123 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62724c3f_5c92_4e77_ba3a_0f6b7215f48a.slice/crio-b228e669bd5b200a2abbd929c9ec6fc4843ea07663488a746bc7f94dc855f949 WatchSource:0}: Error finding container b228e669bd5b200a2abbd929c9ec6fc4843ea07663488a746bc7f94dc855f949: Status 404 returned error can't find the container with id b228e669bd5b200a2abbd929c9ec6fc4843ea07663488a746bc7f94dc855f949 Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.678802 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.700194 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.712952 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t4w45"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.724876 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.733515 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.757898 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.774026 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/29ce863d-02cf-43c6-a249-bfef15cf04be-kube-api-access-b4j9d\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.794151 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfrgj\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-kube-api-access-vfrgj\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.813605 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/a86cb99d-3be8-4acb-98f7-87c5df66c339-kube-api-access-2tjgn\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.834028 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttz52\" (UniqueName: \"kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.854006 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whqd4"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.857124 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.861120 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqdn6\" (UniqueName: \"kubernetes.io/projected/230baada-7ff6-4b95-b44f-b46e54fe1375-kube-api-access-sqdn6\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.863270 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.880972 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2xl\" (UniqueName: \"kubernetes.io/projected/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-kube-api-access-qz2xl\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.892028 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.906847 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.919200 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863e130d-2f68-47ef-8b6c-2871d38a2282-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.922602 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtlhj\" (UniqueName: \"kubernetes.io/projected/d597b1c7-2562-45a2-b301-14d0db548bc8-kube-api-access-xtlhj\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.933841 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.934270 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg2rk\" (UniqueName: \"kubernetes.io/projected/8589782d-8533-4419-b9bf-115446144a39-kube-api-access-gg2rk\") pod \"migrator-59844c95c7-7nw98\" (UID: \"8589782d-8533-4419-b9bf-115446144a39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.937599 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.959608 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcrb9\" (UniqueName: \"kubernetes.io/projected/87528d59-5bdb-4e92-8d6e-062005390f6f-kube-api-access-lcrb9\") pod \"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.965968 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.976355 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlkfg\" (UniqueName: \"kubernetes.io/projected/92112e1c-6b23-4d10-9f2b-0e33616c96f5-kube-api-access-qlkfg\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.996711 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdjzt\" (UniqueName: \"kubernetes.io/projected/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-kube-api-access-kdjzt\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.023193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqf4\" (UniqueName: \"kubernetes.io/projected/60b0275a-57b6-482d-b046-ffd270801add-kube-api-access-fhqf4\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.072268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8q47\" (UniqueName: \"kubernetes.io/projected/79c369eb-e17d-4a32-9167-934aa23fd4fc-kube-api-access-v8q47\") pod \"downloads-7954f5f757-k8v8n\" (UID: \"79c369eb-e17d-4a32-9167-934aa23fd4fc\") " pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.072607 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.072634 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.072653 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073026 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073077 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073095 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b9309ebb-034c-47a1-9328-62fda6feabbd-metrics-tls\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073142 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-config\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073169 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073200 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b4ls\" (UniqueName: \"kubernetes.io/projected/1d5a72cc-b727-4dcf-85cd-d039dc785b65-kube-api-access-7b4ls\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: \"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073235 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnj8w\" (UniqueName: \"kubernetes.io/projected/a67dd2fd-8463-4887-94b7-405df03c5c0a-kube-api-access-hnj8w\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073258 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d5a72cc-b727-4dcf-85cd-d039dc785b65-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: \"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073277 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7bnj\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073320 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a67dd2fd-8463-4887-94b7-405df03c5c0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073354 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9309ebb-034c-47a1-9328-62fda6feabbd-trusted-ca\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073453 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmj27\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-kube-api-access-lmj27\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073514 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073532 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073553 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073606 
4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.074182 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.574165396 +0000 UTC m=+142.694601840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.077690 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zq4gf"] Feb 01 07:24:29 crc kubenswrapper[4835]: W0201 07:24:29.094938 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29ce863d_02cf_43c6_a249_bfef15cf04be.slice/crio-ed5e74bf81ffed845454cbb65c9397567d1c1161ae07f413f27a6ca69f988c8c WatchSource:0}: Error finding container ed5e74bf81ffed845454cbb65c9397567d1c1161ae07f413f27a6ca69f988c8c: Status 404 returned error can't find the container with id ed5e74bf81ffed845454cbb65c9397567d1c1161ae07f413f27a6ca69f988c8c Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.111100 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.124535 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.139503 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.146290 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.150114 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.175536 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.175899 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.175940 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.175965 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swnqf\" (UniqueName: \"kubernetes.io/projected/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-kube-api-access-swnqf\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.175987 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-srv-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176068 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-socket-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176101 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-csi-data-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176176 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-profile-collector-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 
07:24:29.176215 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176236 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176328 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176547 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b9309ebb-034c-47a1-9328-62fda6feabbd-metrics-tls\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176626 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-cabundle\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-config\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176727 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176753 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176826 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9fw\" (UniqueName: \"kubernetes.io/projected/d7c5983d-0780-410d-a88b-06063e0853c1-kube-api-access-7l9fw\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176876 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b4ls\" (UniqueName: \"kubernetes.io/projected/1d5a72cc-b727-4dcf-85cd-d039dc785b65-kube-api-access-7b4ls\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: \"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176899 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176947 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnj8w\" (UniqueName: \"kubernetes.io/projected/a67dd2fd-8463-4887-94b7-405df03c5c0a-kube-api-access-hnj8w\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176996 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d5a72cc-b727-4dcf-85cd-d039dc785b65-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: \"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.177066 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.677046008 +0000 UTC m=+142.797482442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7bnj\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177122 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a67dd2fd-8463-4887-94b7-405df03c5c0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177148 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177183 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9309ebb-034c-47a1-9328-62fda6feabbd-trusted-ca\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177205 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghbdt\" (UniqueName: \"kubernetes.io/projected/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-kube-api-access-ghbdt\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177257 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47vr\" (UniqueName: \"kubernetes.io/projected/ac6d201a-b05d-47ab-b71f-0859b88f0024-kube-api-access-m47vr\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177433 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmj27\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-kube-api-access-lmj27\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc 
kubenswrapper[4835]: I0201 07:24:29.177489 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177513 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177532 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-apiservice-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177618 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-mountpoint-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177642 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9db65efb-d721-45dc-87a6-6ef40be6789d-metrics-tls\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177714 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177740 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd7rd\" (UniqueName: \"kubernetes.io/projected/d18912d2-49bb-4779-9b02-fc9707e55b38-kube-api-access-bd7rd\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177762 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjnzv\" (UniqueName: \"kubernetes.io/projected/1adf70cf-02dc-4c30-9c35-6507314a4fa8-kube-api-access-kjnzv\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177791 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9bww\" (UniqueName: 
\"kubernetes.io/projected/9db65efb-d721-45dc-87a6-6ef40be6789d-kube-api-access-v9bww\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177812 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxn9g\" (UniqueName: \"kubernetes.io/projected/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-kube-api-access-jxn9g\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177862 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d18912d2-49bb-4779-9b02-fc9707e55b38-cert\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177880 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-webhook-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177900 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-certs\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177935 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49g4h\" (UniqueName: \"kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177969 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-serving-cert\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177990 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178010 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhft7\" (UniqueName: \"kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: 
\"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178026 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-node-bootstrap-token\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178073 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9db65efb-d721-45dc-87a6-6ef40be6789d-config-volume\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178092 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-registration-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178138 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1adf70cf-02dc-4c30-9c35-6507314a4fa8-tmpfs\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178154 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-plugins-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178179 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8q47\" (UniqueName: \"kubernetes.io/projected/79c369eb-e17d-4a32-9167-934aa23fd4fc-kube-api-access-v8q47\") pod \"downloads-7954f5f757-k8v8n\" (UID: \"79c369eb-e17d-4a32-9167-934aa23fd4fc\") " pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178214 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-config\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178246 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-key\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178332 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178374 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rdt\" (UniqueName: \"kubernetes.io/projected/2708d65e-6013-4f55-9492-3a3ec5529d9b-kube-api-access-c5rdt\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.185010 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.191399 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-config\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.191934 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf"] Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.195432 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.695391692 +0000 UTC m=+142.815828126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.196579 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.197222 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9309ebb-034c-47a1-9328-62fda6feabbd-trusted-ca\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.213636 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.232975 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d5a72cc-b727-4dcf-85cd-d039dc785b65-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: \"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.233300 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b9309ebb-034c-47a1-9328-62fda6feabbd-metrics-tls\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.235575 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.238614 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.245279 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7bnj\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 
07:24:29.246356 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.247342 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.247701 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-01 07:19:28 +0000 UTC, rotation deadline is 2026-10-28 22:07:43.320729815 +0000 UTC Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.247740 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6470h43m14.072992533s for next certificate rotation Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.257032 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.257299 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.257948 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.263648 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a67dd2fd-8463-4887-94b7-405df03c5c0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.265970 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.281853 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282063 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxn9g\" (UniqueName: \"kubernetes.io/projected/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-kube-api-access-jxn9g\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d18912d2-49bb-4779-9b02-fc9707e55b38-cert\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282117 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-webhook-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282141 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-certs\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282162 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49g4h\" (UniqueName: \"kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282182 4835 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-serving-cert\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282209 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhft7\" (UniqueName: \"kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282230 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-node-bootstrap-token\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282248 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9db65efb-d721-45dc-87a6-6ef40be6789d-config-volume\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282275 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-registration-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1adf70cf-02dc-4c30-9c35-6507314a4fa8-tmpfs\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282313 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-plugins-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282343 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-config\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282363 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-key\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282395 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rdt\" (UniqueName: \"kubernetes.io/projected/2708d65e-6013-4f55-9492-3a3ec5529d9b-kube-api-access-c5rdt\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282464 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swnqf\" (UniqueName: \"kubernetes.io/projected/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-kube-api-access-swnqf\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282497 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-srv-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282523 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-socket-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282544 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-csi-data-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282566 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-profile-collector-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282589 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282622 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282646 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-cabundle\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282710 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l9fw\" (UniqueName: \"kubernetes.io/projected/d7c5983d-0780-410d-a88b-06063e0853c1-kube-api-access-7l9fw\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282736 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282770 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282802 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m47vr\" (UniqueName: \"kubernetes.io/projected/ac6d201a-b05d-47ab-b71f-0859b88f0024-kube-api-access-m47vr\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282822 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghbdt\" (UniqueName: \"kubernetes.io/projected/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-kube-api-access-ghbdt\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.282973 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.782822637 +0000 UTC m=+142.903259071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283051 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283076 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-apiservice-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283101 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd7rd\" (UniqueName: \"kubernetes.io/projected/d18912d2-49bb-4779-9b02-fc9707e55b38-kube-api-access-bd7rd\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283118 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjnzv\" (UniqueName: \"kubernetes.io/projected/1adf70cf-02dc-4c30-9c35-6507314a4fa8-kube-api-access-kjnzv\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283148 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-mountpoint-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283167 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9db65efb-d721-45dc-87a6-6ef40be6789d-metrics-tls\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283201 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9bww\" (UniqueName: \"kubernetes.io/projected/9db65efb-d721-45dc-87a6-6ef40be6789d-kube-api-access-v9bww\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283423 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-socket-dir\") pod 
\"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.291303 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.79128897 +0000 UTC m=+142.911725404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.292429 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmj27\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-kube-api-access-lmj27\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.294345 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9db65efb-d721-45dc-87a6-6ef40be6789d-config-volume\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.294497 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-registration-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.294879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1adf70cf-02dc-4c30-9c35-6507314a4fa8-tmpfs\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.294933 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-plugins-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.296235 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-cabundle\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.297442 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8q47\" (UniqueName: \"kubernetes.io/projected/79c369eb-e17d-4a32-9167-934aa23fd4fc-kube-api-access-v8q47\") pod 
\"downloads-7954f5f757-k8v8n\" (UID: \"79c369eb-e17d-4a32-9167-934aa23fd4fc\") " pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.297598 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d18912d2-49bb-4779-9b02-fc9707e55b38-cert\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.298433 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-certs\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.298604 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-csi-data-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.299554 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-profile-collector-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.299602 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-mountpoint-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.300167 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-srv-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.300403 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-apiservice-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.300870 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-webhook-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.301248 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume\") pod 
\"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.301950 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.302289 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-node-bootstrap-token\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.302810 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-config\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.303684 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-key\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.306868 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-serving-cert\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.307391 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.307790 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9db65efb-d721-45dc-87a6-6ef40be6789d-metrics-tls\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.308978 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.313245 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.335927 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.337864 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnj8w\" (UniqueName: \"kubernetes.io/projected/a67dd2fd-8463-4887-94b7-405df03c5c0a-kube-api-access-hnj8w\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.338200 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.353739 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.354378 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" event={"ID":"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1","Type":"ContainerStarted","Data":"0c8d8bf889d5b4e67fae72cc4e06aef9d04f3f8b5dd91f77a362cddcf40445dd"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.354441 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" event={"ID":"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1","Type":"ContainerStarted","Data":"0cc056cbdcfb51ec2f5356b71f8fee4b3804bf88cc6198d36f0566ef3eba9819"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.381846 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sdz4h" event={"ID":"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d","Type":"ContainerStarted","Data":"cfdbb9382b4a307422d07dd9da4e5828e9c2347ea85b34cc139a1fdbb4a035cb"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.381905 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sdz4h" event={"ID":"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d","Type":"ContainerStarted","Data":"231178deecefe36414e937a38e60842b4f77ff81b48615ff70990bc2a4afcd57"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.383286 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b4ls\" (UniqueName: \"kubernetes.io/projected/1d5a72cc-b727-4dcf-85cd-d039dc785b65-kube-api-access-7b4ls\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: 
\"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.386019 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.386322 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.386527 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.886403198 +0000 UTC m=+143.006839632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.386788 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" event={"ID":"62724c3f-5c92-4e77-ba3a-0f6b7215f48a","Type":"ContainerStarted","Data":"3ce1b71be758dd076de182606cb238305ec470a936ab71da41c867e65c4d55e4"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.386840 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" event={"ID":"62724c3f-5c92-4e77-ba3a-0f6b7215f48a","Type":"ContainerStarted","Data":"b228e669bd5b200a2abbd929c9ec6fc4843ea07663488a746bc7f94dc855f949"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.387253 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.388169 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-t4w45" event={"ID":"fb0c8a64-40d8-4fff-8ca4-b573df90cd88","Type":"ContainerStarted","Data":"3bf00974b0d34ae35d2bfd61912fedaaf2ebd32b9923f911d9959c0ad49e8b0e"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.388220 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-t4w45" event={"ID":"fb0c8a64-40d8-4fff-8ca4-b573df90cd88","Type":"ContainerStarted","Data":"d51b4fc642f7c878f9877442c494fad180b69c834a01b2bad4b512a8a9ef9017"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.388943 4835 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tkff4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.388968 4835 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.389769 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.395809 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.396070 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" event={"ID":"29ce863d-02cf-43c6-a249-bfef15cf04be","Type":"ContainerStarted","Data":"ed5e74bf81ffed845454cbb65c9397567d1c1161ae07f413f27a6ca69f988c8c"} Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.396099 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.896085833 +0000 UTC m=+143.016522267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.398210 4835 patch_prober.go:28] interesting pod/console-operator-58897d9998-t4w45 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.398241 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-t4w45" podUID="fb0c8a64-40d8-4fff-8ca4-b573df90cd88" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.398901 4835 generic.go:334] "Generic (PLEG): container finished" podID="90833a57-ccdb-452f-b86a-7741f52c5a80" containerID="8625565a7389eb8ce101d247c43d8245dc3db5255fb26f6c90bb912fde432587" exitCode=0 Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.398972 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" event={"ID":"90833a57-ccdb-452f-b86a-7741f52c5a80","Type":"ContainerDied","Data":"8625565a7389eb8ce101d247c43d8245dc3db5255fb26f6c90bb912fde432587"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.398991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" event={"ID":"90833a57-ccdb-452f-b86a-7741f52c5a80","Type":"ContainerStarted","Data":"86004b2c0e2950bc7fcf4234811311c6c60c4eb8bbeb3a5fbbf9c12d3ebba80e"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.427097 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" event={"ID":"8924e4db-3c47-4e66-90d1-e74e49f3a65d","Type":"ContainerStarted","Data":"6826a1aa80ff4a7e5da8fd738d69d41ba45e2b1f073216a466b2446b1d67804b"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.427127 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" event={"ID":"8924e4db-3c47-4e66-90d1-e74e49f3a65d","Type":"ContainerStarted","Data":"1d62c0b30da0cbadfd81b94c8bdf7068b408ef05a4aad70f3bdb381e971ba966"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.433404 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.435593 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhft7\" (UniqueName: \"kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.436576 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq"] Feb 01 07:24:29 crc kubenswrapper[4835]: W0201 07:24:29.442029 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f19c84_0217_4b08_8b4d_663096ce67b4.slice/crio-88a43a32aeb11a7266228e44e96343168e4ad3f4bf296e26425609793a59a308 WatchSource:0}: Error finding container 88a43a32aeb11a7266228e44e96343168e4ad3f4bf296e26425609793a59a308: Status 404 returned error can't find the container with id 88a43a32aeb11a7266228e44e96343168e4ad3f4bf296e26425609793a59a308 Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.447374 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghbdt\" (UniqueName: \"kubernetes.io/projected/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-kube-api-access-ghbdt\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.448241 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8hgqx" event={"ID":"9154a093-1841-44f5-a71d-e42f5c19dfba","Type":"ContainerStarted","Data":"a348337aab744f36739678abd65aa608388ee645e9993d277c3a572b6423e421"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.448288 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8hgqx" event={"ID":"9154a093-1841-44f5-a71d-e42f5c19dfba","Type":"ContainerStarted","Data":"3f57da290e1a59ebf25ad55f2d58c4b9d8676678ad28d426555b782b4447196b"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.449945 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" 
event={"ID":"03f29b26-d2bd-48e2-9804-c90a5315658c","Type":"ContainerStarted","Data":"9750538fca96b0766c066bfb611cde62365bf6afe42ad480a5b8b02e34a2a487"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.457604 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" event={"ID":"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0","Type":"ContainerStarted","Data":"04064e9f1a6039a4ab3beed3cb2a5adc02aef26bc6065116b8fa4bbae7f5f049"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.463468 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" event={"ID":"cad3b595-c72f-49b8-92e0-932f9f591375","Type":"ContainerStarted","Data":"fc58fb551f0a225b076d6aed0819c29feae8a582e1526867b4698f5211360397"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.463499 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" event={"ID":"cad3b595-c72f-49b8-92e0-932f9f591375","Type":"ContainerStarted","Data":"2271121ff70fd4f78d2c75e5e8c785b61155e20a1253d6b08d8d07df78c9f569"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.473245 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9bww\" (UniqueName: \"kubernetes.io/projected/9db65efb-d721-45dc-87a6-6ef40be6789d-kube-api-access-v9bww\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.475863 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.481656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" event={"ID":"865ec974-02ed-4218-a599-cf69b6f0a538","Type":"ContainerStarted","Data":"4e9b31596b21d04c3a40ccdc783f37b02e14876d7c408c95a469101f164236bf"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.483484 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.484899 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxn9g\" (UniqueName: \"kubernetes.io/projected/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-kube-api-access-jxn9g\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.500059 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.500226 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49g4h\" (UniqueName: \"kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.500844 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.000829234 +0000 UTC m=+143.121265668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.525229 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" event={"ID":"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c","Type":"ContainerStarted","Data":"87b9c9b22d193dcf8d26bb1e24cb0941aa1472eca81e4cb52d77be7e83a463bf"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.525266 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" event={"ID":"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c","Type":"ContainerStarted","Data":"ef5535503991c96116fd319cea061c35750484864c7e9af1184dda44676f65ff"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.528288 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rdt\" (UniqueName: \"kubernetes.io/projected/2708d65e-6013-4f55-9492-3a3ec5529d9b-kube-api-access-c5rdt\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.529536 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.544783 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.546096 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swnqf\" (UniqueName: \"kubernetes.io/projected/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-kube-api-access-swnqf\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.550438 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.568338 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.571930 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.576351 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjnzv\" (UniqueName: \"kubernetes.io/projected/1adf70cf-02dc-4c30-9c35-6507314a4fa8-kube-api-access-kjnzv\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.588605 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.591429 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: W0201 07:24:29.594535 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod230baada_7ff6_4b95_b44f_b46e54fe1375.slice/crio-daaf1da826328e0631ea38f51d220f0cf04fade3ac5661497df679efe4098dea WatchSource:0}: Error finding container daaf1da826328e0631ea38f51d220f0cf04fade3ac5661497df679efe4098dea: Status 404 returned error can't find the container with id daaf1da826328e0631ea38f51d220f0cf04fade3ac5661497df679efe4098dea Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.594679 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.600362 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd7rd\" (UniqueName: \"kubernetes.io/projected/d18912d2-49bb-4779-9b02-fc9707e55b38-kube-api-access-bd7rd\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.604203 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.604591 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l9fw\" (UniqueName: \"kubernetes.io/projected/d7c5983d-0780-410d-a88b-06063e0853c1-kube-api-access-7l9fw\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.606231 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.106219163 +0000 UTC m=+143.226655597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.607345 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.608037 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.625541 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.646213 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.649195 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.669442 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m47vr\" (UniqueName: \"kubernetes.io/projected/ac6d201a-b05d-47ab-b71f-0859b88f0024-kube-api-access-m47vr\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.687637 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.716988 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.717352 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.217337592 +0000 UTC m=+143.337774026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.719704 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xgqrp"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.752640 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs"] Feb 01 07:24:29 crc kubenswrapper[4835]: W0201 07:24:29.815013 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87528d59_5bdb_4e92_8d6e_062005390f6f.slice/crio-e8b33f44a5103bf68688ec78819c55f87f513c0485bf13af4281fcbc8e5592cb WatchSource:0}: Error finding container e8b33f44a5103bf68688ec78819c55f87f513c0485bf13af4281fcbc8e5592cb: Status 404 returned error can't find the container with id e8b33f44a5103bf68688ec78819c55f87f513c0485bf13af4281fcbc8e5592cb Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.823162 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.826257 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-01 07:24:30.326240983 +0000 UTC m=+143.446677427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.846795 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.887985 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.892979 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.913099 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 07:24:29 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 01 07:24:29 crc kubenswrapper[4835]: [+]process-running ok Feb 01 07:24:29 crc kubenswrapper[4835]: healthz check failed Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.913370 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.924044 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.924325 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.424301718 +0000 UTC m=+143.544738152 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.924396 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.924922 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.424916065 +0000 UTC m=+143.545352499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.932481 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.025144 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.025697 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.525682521 +0000 UTC m=+143.646118955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.039113 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.107285 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.126969 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.127263 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.627253069 +0000 UTC m=+143.747689503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.230894 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.231651 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.731636331 +0000 UTC m=+143.852072765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.333842 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.334419 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.83439329 +0000 UTC m=+143.954829724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.434761 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.435359 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.935344571 +0000 UTC m=+144.055781005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.526476 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.540540 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.540887 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.040874513 +0000 UTC m=+144.161310947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.576942 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k8v8n"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.592681 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" event={"ID":"865ec974-02ed-4218-a599-cf69b6f0a538","Type":"ContainerStarted","Data":"a0fab6d9e455159d489fdf15be4c4ddcbde57a5a92798a83d9a6e85cb794401a"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.593790 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" event={"ID":"d597b1c7-2562-45a2-b301-14d0db548bc8","Type":"ContainerStarted","Data":"f82723bbd3eeabc33c2405380596f767a2cbfda7b3b21dc793212ff339c7a64c"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.594355 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" event={"ID":"92112e1c-6b23-4d10-9f2b-0e33616c96f5","Type":"ContainerStarted","Data":"84904def63e019017a6ac04b6a4a875d059601712a141e29217d78b2543f4131"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.598807 4835 generic.go:334] "Generic (PLEG): container finished" podID="03f29b26-d2bd-48e2-9804-c90a5315658c" containerID="ffb133c9b412f2d348c9b6505beae9d6667bbe2a7616c009fef89ad96ac058eb" exitCode=0 Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.598847 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" event={"ID":"03f29b26-d2bd-48e2-9804-c90a5315658c","Type":"ContainerDied","Data":"ffb133c9b412f2d348c9b6505beae9d6667bbe2a7616c009fef89ad96ac058eb"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.607899 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" event={"ID":"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe","Type":"ContainerStarted","Data":"a50783c21ddc727571ad09de7a7248b3fba9c084e6adc2f62ee12b791522c8b8"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.609595 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.612009 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" podStartSLOduration=117.611994859 podStartE2EDuration="1m57.611994859s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:30.608072185 +0000 UTC m=+143.728508619" watchObservedRunningTime="2026-02-01 07:24:30.611994859 +0000 UTC m=+143.732431293" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.617238 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" event={"ID":"8589782d-8533-4419-b9bf-115446144a39","Type":"ContainerStarted","Data":"d68ea96081a91cbc2c68481f8ec66bcb26682a6c3e8e11909233b29a55eeb908"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.631605 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" event={"ID":"79f19c84-0217-4b08-8b4d-663096ce67b4","Type":"ContainerStarted","Data":"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.631694 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" event={"ID":"79f19c84-0217-4b08-8b4d-663096ce67b4","Type":"ContainerStarted","Data":"88a43a32aeb11a7266228e44e96343168e4ad3f4bf296e26425609793a59a308"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.631838 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.643918 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.645215 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.145197994 +0000 UTC m=+144.265634428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.648389 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" event={"ID":"8924e4db-3c47-4e66-90d1-e74e49f3a65d","Type":"ContainerStarted","Data":"97dbbec403cdf097004c054b134743aca0a923e14c41eacb5b0f64ceb3368b74"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.651470 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" event={"ID":"60b0275a-57b6-482d-b046-ffd270801add","Type":"ContainerStarted","Data":"44fdeff9b4db5725de72107a92d6616daa4dfd03e29b3f455ffcdf49c0c3d090"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.652440 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" event={"ID":"a86cb99d-3be8-4acb-98f7-87c5df66c339","Type":"ContainerStarted","Data":"836a4e99a30dc06078255d861eba18ce6360993f061c0f528c15d4ba51ec34c8"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.652463 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" event={"ID":"a86cb99d-3be8-4acb-98f7-87c5df66c339","Type":"ContainerStarted","Data":"579fd3a8b9da3927d34eedf1e5b918879be38fa019c6f33ffd06c053ee0996cf"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.653529 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" event={"ID":"1adf70cf-02dc-4c30-9c35-6507314a4fa8","Type":"ContainerStarted","Data":"9f67f0cf8faaaf9cb7d8e5b78ebab084593afb4b59d7daa5df0a7a15802ec1f9"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.664025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" event={"ID":"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0","Type":"ContainerStarted","Data":"80b5c7e9e0d040ec563b62966f2af8084efe396a0d8a11d4f5ec0724b439cf3e"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.664857 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2vc59" event={"ID":"d7c5983d-0780-410d-a88b-06063e0853c1","Type":"ContainerStarted","Data":"91bc94d571f384c937a2764c3bf071a836490dd42722ffdecdec5838001dc378"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.671466 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" event={"ID":"29ce863d-02cf-43c6-a249-bfef15cf04be","Type":"ContainerStarted","Data":"fba4b5a44391325820f99ce8185b2d3a3d092896e46650dbf0eb4db7c4061b19"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.681847 4835 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hpgql container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" 
start-of-body= Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.681901 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.716432 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" event={"ID":"90833a57-ccdb-452f-b86a-7741f52c5a80","Type":"ContainerStarted","Data":"0f1d8581f1c88d783d60eebe5654895b6af72c09b306a277f76195a06116b890"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.720103 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.738633 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" event={"ID":"863e130d-2f68-47ef-8b6c-2871d38a2282","Type":"ContainerStarted","Data":"05ed26f845aaac5c630e08f419c563f115897f945e20c4def0d966f253b5549c"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.745297 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.751516 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.251500176 +0000 UTC m=+144.371936610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.760591 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" event={"ID":"87528d59-5bdb-4e92-8d6e-062005390f6f","Type":"ContainerStarted","Data":"e8b33f44a5103bf68688ec78819c55f87f513c0485bf13af4281fcbc8e5592cb"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.808946 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8hgqx" podStartSLOduration=118.80892847 podStartE2EDuration="1m58.80892847s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:30.806800714 +0000 UTC m=+143.927237148" watchObservedRunningTime="2026-02-01 07:24:30.80892847 +0000 UTC m=+143.929364904" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.809151 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" event={"ID":"230baada-7ff6-4b95-b44f-b46e54fe1375","Type":"ContainerStarted","Data":"daaf1da826328e0631ea38f51d220f0cf04fade3ac5661497df679efe4098dea"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.846921 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.850124 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.350087525 +0000 UTC m=+144.470523959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.886334 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" podStartSLOduration=118.886317451 podStartE2EDuration="1m58.886317451s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:30.884859762 +0000 UTC m=+144.005296196" watchObservedRunningTime="2026-02-01 07:24:30.886317451 +0000 UTC m=+144.006753875" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.897530 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gmr7g"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.949094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.952245 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.452224378 +0000 UTC m=+144.572660812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.974628 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.982151 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 07:24:30 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 01 07:24:30 crc kubenswrapper[4835]: [+]process-running ok Feb 01 07:24:30 crc kubenswrapper[4835]: healthz check failed Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.982360 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.049783 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.050126 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.550112719 +0000 UTC m=+144.670549153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.138663 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" podStartSLOduration=119.138639962 podStartE2EDuration="1m59.138639962s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.096613074 +0000 UTC m=+144.217049508" watchObservedRunningTime="2026-02-01 07:24:31.138639962 +0000 UTC m=+144.259076396" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.138964 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" podStartSLOduration=119.13895859 podStartE2EDuration="1m59.13895859s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.130504367 +0000 UTC m=+144.250940791" watchObservedRunningTime="2026-02-01 07:24:31.13895859 +0000 UTC m=+144.259395024" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.154473 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.154767 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.654755047 +0000 UTC m=+144.775191481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.155290 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" podStartSLOduration=119.1552699 podStartE2EDuration="1m59.1552699s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.154801258 +0000 UTC m=+144.275237702" watchObservedRunningTime="2026-02-01 07:24:31.1552699 +0000 UTC m=+144.275706334" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.174427 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.256908 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.257765 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.757750292 +0000 UTC m=+144.878186716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.315496 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwsnp"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.354254 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.355422 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-t4w45" podStartSLOduration=119.355391786 podStartE2EDuration="1m59.355391786s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.354689578 +0000 UTC m=+144.475126042" watchObservedRunningTime="2026-02-01 07:24:31.355391786 +0000 UTC m=+144.475828220" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.362972 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.363301 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.863283444 +0000 UTC m=+144.983719878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.385447 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-shvm4"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.386119 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7"] Feb 01 07:24:31 crc kubenswrapper[4835]: W0201 07:24:31.405116 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137b200e_5dcd_43c9_82e2_332071d84cb0.slice/crio-42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5 WatchSource:0}: Error finding container 42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5: Status 404 returned error can't find the container with id 42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5 Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.415811 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4qc29"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.453487 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.457739 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.487667 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fbdw8"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.495586 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.496303 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.99628132 +0000 UTC m=+145.116717744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.499788 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"]
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.501037 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-sdz4h" podStartSLOduration=119.501028165 podStartE2EDuration="1m59.501028165s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.493431775 +0000 UTC m=+144.613868229" watchObservedRunningTime="2026-02-01 07:24:31.501028165 +0000 UTC m=+144.621464599"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.551895 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" podStartSLOduration=119.551882196 podStartE2EDuration="1m59.551882196s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.549546035 +0000 UTC m=+144.669982479" watchObservedRunningTime="2026-02-01 07:24:31.551882196 +0000 UTC m=+144.672318630"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.588609 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6"]
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.596891 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.597192 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.09718167 +0000 UTC m=+145.217618104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.598146 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" podStartSLOduration=119.598130385 podStartE2EDuration="1m59.598130385s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.589483787 +0000 UTC m=+144.709920221" watchObservedRunningTime="2026-02-01 07:24:31.598130385 +0000 UTC m=+144.718566819"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.688028 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" podStartSLOduration=119.687992705 podStartE2EDuration="1m59.687992705s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.682054858 +0000 UTC m=+144.802491302" watchObservedRunningTime="2026-02-01 07:24:31.687992705 +0000 UTC m=+144.808429139"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.702249 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.702530 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.202515637 +0000 UTC m=+145.322952071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.792766 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" podStartSLOduration=119.792726366 podStartE2EDuration="1m59.792726366s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.764400939 +0000 UTC m=+144.884837383" watchObservedRunningTime="2026-02-01 07:24:31.792726366 +0000 UTC m=+144.913162800"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.811504 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
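The repeated failures above all trace to one condition: every CSI mount and unmount is dispatched by driver name, and kubevirt.io.hostpath-provisioner has not yet registered itself with this kubelet, so the client lookup fails before any gRPC call is attempted. A minimal Go sketch of that lookup pattern follows; the names and types are hypothetical, not kubelet source.

package main

import (
	"fmt"
	"sync"
)

// driverRegistry is an illustrative stand-in for the kubelet's set of
// CSI drivers that have completed plugin registration.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = struct{}{}
}

// client fails fast when the named plugin has not yet registered,
// which is exactly the state the log records above are reporting.
func (r *driverRegistry) client(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]struct{}{}}
	// Mount attempts racing ahead of plugin registration fail:
	fmt.Println(reg.client("kubevirt.io.hostpath-provisioner"))
	// Once the hostpath plugin pod registers the driver, the same call succeeds:
	reg.register("kubevirt.io.hostpath-provisioner")
	fmt.Println(reg.client("kubevirt.io.hostpath-provisioner"))
}

Note that csi-hostpathplugin-xwsnp only gets its ContainerStarted event further down this log, which is consistent with the driver still being unregistered throughout this window.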
Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.811835 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.311820459 +0000 UTC m=+145.432256893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.824441 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" event={"ID":"2708d65e-6013-4f55-9492-3a3ec5529d9b","Type":"ContainerStarted","Data":"68752236067db4daa25ed6dad49be45b5927ec7f0c7dabea55b260a167d003e7"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.828472 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" podStartSLOduration=119.828461708 podStartE2EDuration="1m59.828461708s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.827193114 +0000 UTC m=+144.947629548" watchObservedRunningTime="2026-02-01 07:24:31.828461708 +0000 UTC m=+144.948898142"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.829575 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" podStartSLOduration=119.829567957 podStartE2EDuration="1m59.829567957s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.793611329 +0000 UTC m=+144.914047763" watchObservedRunningTime="2026-02-01 07:24:31.829567957 +0000 UTC m=+144.950004381"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.836799 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" event={"ID":"b9309ebb-034c-47a1-9328-62fda6feabbd","Type":"ContainerStarted","Data":"34c45a02e198c151baf79c6bd9ea077b132518649f498984b1e6edc4d52e38af"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.836843 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" event={"ID":"b9309ebb-034c-47a1-9328-62fda6feabbd","Type":"ContainerStarted","Data":"902927a557ceae22017b09801beb858c2156478a96d2475071eea4cdead37291"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.849461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" event={"ID":"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe","Type":"ContainerStarted","Data":"45bfc4d84532a62ff8085f9c09c0be824a7a4582ab463f16cdf3f5794b587e23"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.859931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" event={"ID":"8589782d-8533-4419-b9bf-115446144a39","Type":"ContainerStarted","Data":"8ff3274645aac068a73a6dd08b164a26ce7d623f6f8b3154e10e7315f5707261"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.859972 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" event={"ID":"8589782d-8533-4419-b9bf-115446144a39","Type":"ContainerStarted","Data":"34bf34fa75c6a937a871093522648f070a9aaff3a3f14d50555984d54d2dc781"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.862036 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" event={"ID":"a86cb99d-3be8-4acb-98f7-87c5df66c339","Type":"ContainerStarted","Data":"52d86496d8b51eeb24c85722fdab7a4b2e02fa19d13f64b3364dc684926182c1"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.865146 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" event={"ID":"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2","Type":"ContainerStarted","Data":"d0f923b266c0e02367584fffa4072913a3a0674a08e4c883dd5e6d0420893cf9"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.866570 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" event={"ID":"137b200e-5dcd-43c9-82e2-332071d84cb0","Type":"ContainerStarted","Data":"42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.870788 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" podStartSLOduration=119.870772373 podStartE2EDuration="1m59.870772373s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.867310662 +0000 UTC m=+144.987747096" watchObservedRunningTime="2026-02-01 07:24:31.870772373 +0000 UTC m=+144.991208817"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.874076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" event={"ID":"1adf70cf-02dc-4c30-9c35-6507314a4fa8","Type":"ContainerStarted","Data":"9497174ae637109963bc6730afc85375c0d536a7ac88093bf3498002c11eb52f"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.875264 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.879258 4835 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q45cc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body=
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.879294 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" podUID="1adf70cf-02dc-4c30-9c35-6507314a4fa8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.884904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k8v8n" event={"ID":"79c369eb-e17d-4a32-9167-934aa23fd4fc","Type":"ContainerStarted","Data":"a9b95c2516ea1eabe650c1202217a4a89526c836103934183929279617805fc6"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.884939 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k8v8n" event={"ID":"79c369eb-e17d-4a32-9167-934aa23fd4fc","Type":"ContainerStarted","Data":"bb716b6d8958f78a4275e57edcbac3cb15c220499f25088429bb8a8d6d5387bc"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.885440 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k8v8n"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.887824 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" event={"ID":"92112e1c-6b23-4d10-9f2b-0e33616c96f5","Type":"ContainerStarted","Data":"fb0b1d1b79fd3893cb8f2c62f378e09b996faf79f605171f4ad50b84dbf9d01f"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.888235 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8v8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.888269 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8v8n" podUID="79c369eb-e17d-4a32-9167-934aa23fd4fc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.890354 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" event={"ID":"03f29b26-d2bd-48e2-9804-c90a5315658c","Type":"ContainerStarted","Data":"500efa79ef1199130e63fdd3b869fb6281b92c3f15f0f107143951cc15ae6a54"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.893643 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" event={"ID":"863e130d-2f68-47ef-8b6c-2871d38a2282","Type":"ContainerStarted","Data":"18d37d8e45f49ce4cf4dd8ee9eba9e125d812be79e3bfd7cb1db8ec39b52f7fb"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.894742 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 01 07:24:31 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 01 07:24:31 crc kubenswrapper[4835]: [+]process-running ok
Feb 01 07:24:31 crc kubenswrapper[4835]: healthz check failed
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.894780 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.895945 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" event={"ID":"8fa1edf3-e0a6-4d1a-aa61-172397ca736b","Type":"ContainerStarted","Data":"21f9211dadfb31994ca7f72cf7d9d116a26133f97e51e7c5f56079086f180d06"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.897308 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2vc59" event={"ID":"d7c5983d-0780-410d-a88b-06063e0853c1","Type":"ContainerStarted","Data":"8d7e23c3615eeb7d20bbb7bd4bc75d40abef226640be2b1ea7f935bf7023ec6d"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.898529 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" event={"ID":"8615180e-fc31-41b2-ad59-5ae2e48af5a2","Type":"ContainerStarted","Data":"756ac183cdf318bae9818cbd3f3e4f67346c6974661fa7194394a92f9755088e"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.900056 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" event={"ID":"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb","Type":"ContainerStarted","Data":"7e3f187c0d5740afc8abd1fba600a581ae1f40b6007c292cf372af7333a6e571"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.900079 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" event={"ID":"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb","Type":"ContainerStarted","Data":"0baa23da9c1e34ce11481805880843d424a6c693b3b430d5d88f755bed846eac"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.901173 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" event={"ID":"60b0275a-57b6-482d-b046-ffd270801add","Type":"ContainerStarted","Data":"b680a823fc58f2fc572df89b86270e1e92f77caca75736248ef8021a98647306"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.901331 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.902573 4835 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-p5fjs container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.902607 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" podUID="60b0275a-57b6-482d-b046-ffd270801add" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.904620 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gmr7g" event={"ID":"9db65efb-d721-45dc-87a6-6ef40be6789d","Type":"ContainerStarted","Data":"a78ec2f4a3d7af98fa72594c73af17af831ae788fbcaca6dc1d60924281c8a26"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.904671 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gmr7g" event={"ID":"9db65efb-d721-45dc-87a6-6ef40be6789d","Type":"ContainerStarted","Data":"45c6d1f550f80c91e3f97631abbe801f186c623ed96faf0ad8e749a3db4d4059"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.906574 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" event={"ID":"1d5a72cc-b727-4dcf-85cd-d039dc785b65","Type":"ContainerStarted","Data":"7bf69ff9b086d3ab804b8474505b9e9ec776906b696cbf8354574eacdea008b9"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.910186 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" event={"ID":"87528d59-5bdb-4e92-8d6e-062005390f6f","Type":"ContainerStarted","Data":"97a783040f0e0e229bc6e7b0fbac7de4489d3ced77713afe5be1277fc9812001"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.912313 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.912428 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.4123919 +0000 UTC m=+145.532828334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.913320 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" event={"ID":"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87","Type":"ContainerStarted","Data":"62a0ffdd73fe6b4dcb10967bb2153470901670653ea73aea1bdc348653b1df73"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.913541 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
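Each failed volume operation above is re-queued rather than retried inline: the operation is keyed per volume and pod, and no new attempt is permitted until the previous failure's backoff window has elapsed (500ms here, which this excerpt shows as the initial step). A rough Go sketch of such a per-operation gate, assuming exponential growth with a cap; the types are illustrative, not the kubelet's nestedpendingoperations.

package main

import (
	"fmt"
	"time"
)

// opBackoff remembers when an operation last failed and how long to wait
// before the next attempt is permitted.
type opBackoff struct {
	lastFailure time.Time
	delay       time.Duration
}

type gate struct {
	ops     map[string]*opBackoff // keyed per operation, like volumeName+podName
	initial time.Duration
	max     time.Duration
}

// tryStart returns an error while the operation is still inside its backoff
// window, mirroring "No retries permitted until ... (durationBeforeRetry ...)".
func (g *gate) tryStart(key string, now time.Time) error {
	if b, ok := g.ops[key]; ok {
		if until := b.lastFailure.Add(b.delay); now.Before(until) {
			return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
				until.Format("15:04:05.000"), b.delay)
		}
	}
	return nil
}

// fail records a failed attempt and grows the wait exponentially, capped.
func (g *gate) fail(key string, now time.Time) {
	b, ok := g.ops[key]
	if !ok {
		b = &opBackoff{delay: g.initial}
		g.ops[key] = b
	} else {
		b.delay *= 2
		if b.delay > g.max {
			b.delay = g.max
		}
	}
	b.lastFailure = now
}

func main() {
	g := &gate{ops: map[string]*opBackoff{}, initial: 500 * time.Millisecond, max: 2 * time.Minute}
	key := "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
	now := time.Now()
	g.fail(key, now)                                   // mount fails: driver not registered yet
	fmt.Println(g.tryStart(key, now))                  // still gated for 500ms
	fmt.Println(g.tryStart(key, now.Add(time.Second))) // window passed: <nil>, retry allowed
}

This is why the mount and unmount errors recur on a roughly half-second cadence in this window instead of spinning continuously.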
Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.913926 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.413913801 +0000 UTC m=+145.534350235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.933767 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" event={"ID":"230baada-7ff6-4b95-b44f-b46e54fe1375","Type":"ContainerStarted","Data":"20df8a69a62cd6c0bf4f5b7e6a30fa0331596600892443bcd5207a2cda8ec740"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.933808 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" event={"ID":"230baada-7ff6-4b95-b44f-b46e54fe1375","Type":"ContainerStarted","Data":"b5fa1f0c353b2e821d299637df5ca8511d07ab552242aac524d7494f3b468896"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.935680 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" event={"ID":"d597b1c7-2562-45a2-b301-14d0db548bc8","Type":"ContainerStarted","Data":"44361d82b8573e1975cd63e85890433ba380a08b2517bbecdcd75ca66f8b32ac"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.937699 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" event={"ID":"a67dd2fd-8463-4887-94b7-405df03c5c0a","Type":"ContainerStarted","Data":"eac7af213b671e8be276b4fee8f443830786071d41b22ae8b90397f3b0465f31"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.938696 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-shvm4" event={"ID":"d18912d2-49bb-4779-9b02-fc9707e55b38","Type":"ContainerStarted","Data":"7d325207245525c92020b9e53fd076f104c78ed508360aff40003de0377e4310"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.948603 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" event={"ID":"ac6d201a-b05d-47ab-b71f-0859b88f0024","Type":"ContainerStarted","Data":"29c32d2386fad600c22aaa50e690ade449039c568fa6280035cdf3cd047811e8"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.953725 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.990792 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" podStartSLOduration=118.990765417 podStartE2EDuration="1m58.990765417s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.98939063 +0000 UTC m=+145.109827064" watchObservedRunningTime="2026-02-01 07:24:31.990765417 +0000 UTC m=+145.111201851"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.016368 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.018016 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.518000855 +0000 UTC m=+145.638437289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.034436 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" podStartSLOduration=119.034420938 podStartE2EDuration="1m59.034420938s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.032008984 +0000 UTC m=+145.152445418" watchObservedRunningTime="2026-02-01 07:24:32.034420938 +0000 UTC m=+145.154857372"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.078488 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bztv4"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.078839 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bztv4"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.118377 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.118661 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.618649108 +0000 UTC m=+145.739085542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.126003 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" podStartSLOduration=120.125980861 podStartE2EDuration="2m0.125980861s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.125504689 +0000 UTC m=+145.245941123" watchObservedRunningTime="2026-02-01 07:24:32.125980861 +0000 UTC m=+145.246417295"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.127686 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2vc59" podStartSLOduration=6.127678676 podStartE2EDuration="6.127678676s" podCreationTimestamp="2026-02-01 07:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.079148957 +0000 UTC m=+145.199585391" watchObservedRunningTime="2026-02-01 07:24:32.127678676 +0000 UTC m=+145.248115110"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.155688 4835 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bztv4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]log ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]etcd ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/max-in-flight-filter ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 01 07:24:32 crc kubenswrapper[4835]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/openshift.io-startinformers ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 01 07:24:32 crc kubenswrapper[4835]: livez check failed
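The probe bodies quoted above come from an aggregated healthz-style endpoint: each named check reports [+] or [-], failure details are withheld from unauthenticated probe output, and any failing check turns the whole response into HTTP 500, which is what the kubelet's startup probe then records. A small illustrative handler in that style (a sketch, not the openshift-apiserver or router implementation):

package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

// healthz runs every named check and renders the [+]/[-] report seen in
// the probe output above; any failure yields a 500 status.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				// Details are withheld from the probe response.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe sees "statuscode: 500"
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	// Hypothetical checks echoing the router body above: the process runs,
	// but its backend state has not synced yet.
	http.HandleFunc("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}))
	http.ListenAndServe(":8080", nil)
}

In the apiserver body above, the two failing poststarthook checks are what keep livez at 500; once those hooks complete, the same endpoint flips to 200 and the startup probe passes.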
containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.219430 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.220555 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.720531664 +0000 UTC m=+145.840968148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.248570 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-k8v8n" podStartSLOduration=120.248557023 podStartE2EDuration="2m0.248557023s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.246338784 +0000 UTC m=+145.366775218" watchObservedRunningTime="2026-02-01 07:24:32.248557023 +0000 UTC m=+145.368993457" Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.248873 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" podStartSLOduration=120.248869381 podStartE2EDuration="2m0.248869381s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.198702719 +0000 UTC m=+145.319139153" watchObservedRunningTime="2026-02-01 07:24:32.248869381 +0000 UTC m=+145.369305815" Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.269301 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" podStartSLOduration=120.269282929 podStartE2EDuration="2m0.269282929s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.266205168 +0000 UTC m=+145.386641602" watchObservedRunningTime="2026-02-01 07:24:32.269282929 +0000 UTC m=+145.389719363" Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.309796 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" podStartSLOduration=119.309763997 podStartE2EDuration="1m59.309763997s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.309355866 +0000 UTC m=+145.429792300" watchObservedRunningTime="2026-02-01 07:24:32.309763997 +0000 UTC m=+145.430200431" Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.322083 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.322636 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.822615875 +0000 UTC m=+145.943052309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.350064 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" podStartSLOduration=120.350032618 podStartE2EDuration="2m0.350032618s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.349261248 +0000 UTC m=+145.469697702" watchObservedRunningTime="2026-02-01 07:24:32.350032618 +0000 UTC m=+145.470469052" Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.423753 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.424148 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.923976338 +0000 UTC m=+146.044412772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.424248 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.424602 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.924589894 +0000 UTC m=+146.045026328 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.436147 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" podStartSLOduration=120.436130988 podStartE2EDuration="2m0.436130988s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.430762287 +0000 UTC m=+145.551198711" watchObservedRunningTime="2026-02-01 07:24:32.436130988 +0000 UTC m=+145.556567422" Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.436796 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" podStartSLOduration=119.436791496 podStartE2EDuration="1m59.436791496s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.399474702 +0000 UTC m=+145.519911136" watchObservedRunningTime="2026-02-01 07:24:32.436791496 +0000 UTC m=+145.557227930" Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.525768 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.526266 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
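The pod_startup_latency_tracker lines in this stretch are plain timestamp arithmetic: with no image pulls recorded (both pull timestamps are zero-valued), podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp, and the m=+ suffixes appear to be Go monotonic-clock readings relative to kubelet process start. A short Go snippet reproducing the downloads-7954f5f757-k8v8n numbers from the record above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the "Observed pod startup duration" record
	// for openshift-console/downloads-7954f5f757-k8v8n above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-02-01 07:22:32 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-02-01 07:24:32.248557023 +0000 UTC")
	if err != nil {
		panic(err)
	}
	slo := observed.Sub(created)
	fmt.Printf("podStartSLOduration=%.9f podStartE2EDuration=%q\n", slo.Seconds(), slo.String())
	// prints: podStartSLOduration=120.248557023 podStartE2EDuration="2m0.248557023s"
}

The ~2m durations for most operator pods are an artifact of this single-node (crc) cluster booting: the pods were created around 07:22:32 but only observed running once the node finished coming up.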
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.526266 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.026248004 +0000 UTC m=+146.146684438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.627574 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.627907 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.127896034 +0000 UTC m=+146.248332468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.729084 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.729151 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.229137963 +0000 UTC m=+146.349574397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.730051 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.730443 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.230426717 +0000 UTC m=+146.350863151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.830821 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.831120 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.331104561 +0000 UTC m=+146.451540995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.903843 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 01 07:24:32 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]process-running ok
Feb 01 07:24:32 crc kubenswrapper[4835]: healthz check failed
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.903908 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.932010 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
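The many "Probe failed ... connection refused" records in this log follow one mechanism: the kubelet issues an HTTP GET against the pod IP and port, and treats a transport error (connection refused while the container has started but is not yet listening) or a non-success status code the same way, recording the output string verbatim. A simplified probe function in that spirit; illustrative, not the kubelet's prober.go.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP performs one HTTP probe attempt and returns whether it passed
// plus the output string that a failure would log.
func probeHTTP(url string, timeout time.Duration) (ok bool, output string) {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 10.217.0.20:8080: connect: connection refused"
		return false, fmt.Sprintf("Get %q: %v", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return true, ""
	}
	return false, fmt.Sprintf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	// Address taken from the downloads pod probes above; with nothing
	// listening there locally this fails with connection refused, just as
	// the log shows until the server inside the container comes up.
	ok, out := probeHTTP("http://10.217.0.20:8080/", time.Second)
	fmt.Println(ok, out)
}

Both failure shapes appear above: connection refused for containers whose servers are still starting, and statuscode 500 for servers that are up but whose healthz checks still fail.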
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.932332 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.432321029 +0000 UTC m=+146.552757463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.959310 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" event={"ID":"2708d65e-6013-4f55-9492-3a3ec5529d9b","Type":"ContainerStarted","Data":"1fa9cace0d69d21718b39696c686f85b3be7f0345f914ae3d9a34bad8ad4a720"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.960828 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gmr7g" event={"ID":"9db65efb-d721-45dc-87a6-6ef40be6789d","Type":"ContainerStarted","Data":"7410aee2264cb726d977a414fd7ee98edbca424f416ffa6ee95fe55527e928aa"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.960939 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gmr7g"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.965770 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" event={"ID":"92112e1c-6b23-4d10-9f2b-0e33616c96f5","Type":"ContainerStarted","Data":"a4ba8c26df926b27f7caeb56e4ddfd0a81bc628464bdbe1d1c3aa525acde89ee"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.967732 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-shvm4" event={"ID":"d18912d2-49bb-4779-9b02-fc9707e55b38","Type":"ContainerStarted","Data":"b1e8073e200344862f088de57d95ee584f40f1ca870f3606f943921a94eef26b"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.970829 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" event={"ID":"137b200e-5dcd-43c9-82e2-332071d84cb0","Type":"ContainerStarted","Data":"98c793df94b793188e86124f6ff1a8161f18d725c6666c0e72eb3d6113d10246"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.984808 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" event={"ID":"87528d59-5bdb-4e92-8d6e-062005390f6f","Type":"ContainerStarted","Data":"5c4e1c79dc0d0b672a12fe1fa78b8c2ca579c25b8fdff91205e2ce6b414d999a"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.987958 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" event={"ID":"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87","Type":"ContainerStarted","Data":"80c14e64102bed211e8ffd95ca8632fc5102b3a423d5991aff48a1918f7f78f9"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.988216 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.989318 4835 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7ngw7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.989358 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" podUID="6fa37cd2-a8e5-4624-91e2-6d249bdb7c87" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.989605 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" podStartSLOduration=119.989593669 podStartE2EDuration="1m59.989593669s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.987207136 +0000 UTC m=+146.107643560" watchObservedRunningTime="2026-02-01 07:24:32.989593669 +0000 UTC m=+146.110030103"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.989995 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" podStartSLOduration=120.98998875 podStartE2EDuration="2m0.98998875s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.469846317 +0000 UTC m=+145.590282751" watchObservedRunningTime="2026-02-01 07:24:32.98998875 +0000 UTC m=+146.110425184"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.995170 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" event={"ID":"b9309ebb-034c-47a1-9328-62fda6feabbd","Type":"ContainerStarted","Data":"4c493a11eab89aefcb6bfd875f74f37469eb8583621ec363e3197c1f05f4cf83"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.000954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" event={"ID":"a67dd2fd-8463-4887-94b7-405df03c5c0a","Type":"ContainerStarted","Data":"0025c9036568250e9cd5742f8d6f745265094704dd04fca029716a2aa49bcab7"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.004645 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" event={"ID":"8fa1edf3-e0a6-4d1a-aa61-172397ca736b","Type":"ContainerStarted","Data":"81c0b5887294570dc667b6cb89b8b18fd90478ace13816a814b9986ebe7391b6"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.004813 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" event={"ID":"8fa1edf3-e0a6-4d1a-aa61-172397ca736b","Type":"ContainerStarted","Data":"b2c0143f0dfa31948fd52363889541c1e023e20062ecf14fdc66c741240f8954"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.004893 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.019948 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" event={"ID":"8615180e-fc31-41b2-ad59-5ae2e48af5a2","Type":"ContainerStarted","Data":"aec701259e552f23dfcf4e9cf051bfbdb52a72d9c0db034b350a2330451e632f"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.021022 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.028565 4835 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mjg6g container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.028663 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.036482 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" event={"ID":"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2","Type":"ContainerStarted","Data":"d3c37d18f88bd6af013df8df81226f02096cf5fd27355056162bc199d4d23fec"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.036910 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
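The bursts of ContainerStarted events in this stretch come from relisting container state and diffing it against the previous snapshot; each newly seen ID (sandbox or container) yields one event, which is why pods often log two ContainerStarted entries back to back. A toy version of that diff, with hypothetical IDs; not the kubelet's pod lifecycle event generator.

package main

import "fmt"

type event struct {
	ID   string // pod UID
	Type string // e.g. ContainerStarted
	Data string // container or sandbox ID
}

// relist diffs two snapshots of a pod's running container IDs and emits
// one event per ID that is newly present.
func relist(podUID string, old, cur map[string]bool) []event {
	var evs []event
	for id := range cur {
		if !old[id] {
			evs = append(evs, event{ID: podUID, Type: "ContainerStarted", Data: id})
		}
	}
	return evs
}

func main() {
	// Hypothetical snapshots: the sandbox already existed, one container
	// has just started since the last relist.
	old := map[string]bool{"sandbox-id": true}
	cur := map[string]bool{"sandbox-id": true, "container-id": true}
	for _, e := range relist("example-pod-uid", old, cur) {
		fmt.Printf("\"SyncLoop (PLEG): event for pod\" event=%+v\n", e)
	}
}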
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.037963 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.537942174 +0000 UTC m=+146.658378608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.061329 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" event={"ID":"1d5a72cc-b727-4dcf-85cd-d039dc785b65","Type":"ContainerStarted","Data":"ffe4a8bb29d3c3f4ed655ac9b373fd3387c5fd2915632b7f22de512845fe8612"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.061606 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" event={"ID":"1d5a72cc-b727-4dcf-85cd-d039dc785b65","Type":"ContainerStarted","Data":"ce0b94d7bd2530d394066be64acc80826a336507fdc5fabcf3b53f87467f7666"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064153 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8v8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064185 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8v8n" podUID="79c369eb-e17d-4a32-9167-934aa23fd4fc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064447 4835 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-p5fjs container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064552 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" podUID="60b0275a-57b6-482d-b046-ffd270801add" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064712 4835 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q45cc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body=
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064769 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" podUID="1adf70cf-02dc-4c30-9c35-6507314a4fa8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.088728 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-shvm4" podStartSLOduration=7.088708882 podStartE2EDuration="7.088708882s" podCreationTimestamp="2026-02-01 07:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.031271868 +0000 UTC m=+146.151708302" watchObservedRunningTime="2026-02-01 07:24:33.088708882 +0000 UTC m=+146.209145316"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.090558 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gmr7g" podStartSLOduration=7.090552111 podStartE2EDuration="7.090552111s" podCreationTimestamp="2026-02-01 07:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.088645611 +0000 UTC m=+146.209082045" watchObservedRunningTime="2026-02-01 07:24:33.090552111 +0000 UTC m=+146.210988545"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.139461 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.144138 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.644125363 +0000 UTC m=+146.764561797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.149617 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" podStartSLOduration=121.149602238 podStartE2EDuration="2m1.149602238s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.123867879 +0000 UTC m=+146.244304313" watchObservedRunningTime="2026-02-01 07:24:33.149602238 +0000 UTC m=+146.270038672"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.211497 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" podStartSLOduration=121.211479669 podStartE2EDuration="2m1.211479669s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.150809129 +0000 UTC m=+146.271245583" watchObservedRunningTime="2026-02-01 07:24:33.211479669 +0000 UTC m=+146.331916103"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.237093 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" podStartSLOduration=121.237072993 podStartE2EDuration="2m1.237072993s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.210925284 +0000 UTC m=+146.331361718" watchObservedRunningTime="2026-02-01 07:24:33.237072993 +0000 UTC m=+146.357509427"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.243900 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.244242 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.744227362 +0000 UTC m=+146.864663796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.278631 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" podStartSLOduration=121.278617059 podStartE2EDuration="2m1.278617059s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.272007125 +0000 UTC m=+146.392443559" watchObservedRunningTime="2026-02-01 07:24:33.278617059 +0000 UTC m=+146.399053493"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.278938 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" podStartSLOduration=120.278934647 podStartE2EDuration="2m0.278934647s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.239138138 +0000 UTC m=+146.359574562" watchObservedRunningTime="2026-02-01 07:24:33.278934647 +0000 UTC m=+146.399371081"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.298025 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" podStartSLOduration=120.29801007 podStartE2EDuration="2m0.29801007s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.297293571 +0000 UTC m=+146.417730005" watchObservedRunningTime="2026-02-01 07:24:33.29801007 +0000 UTC m=+146.418446504"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201
07:24:33.326034 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" podStartSLOduration=120.326019878 podStartE2EDuration="2m0.326019878s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.324829127 +0000 UTC m=+146.445265571" watchObservedRunningTime="2026-02-01 07:24:33.326019878 +0000 UTC m=+146.446456322" Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.346094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.346529 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.846514739 +0000 UTC m=+146.966951173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.395010 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" podStartSLOduration=121.394994527 podStartE2EDuration="2m1.394994527s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.364983635 +0000 UTC m=+146.485420069" watchObservedRunningTime="2026-02-01 07:24:33.394994527 +0000 UTC m=+146.515430961" Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.396918 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" podStartSLOduration=121.396912987 podStartE2EDuration="2m1.396912987s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.394072752 +0000 UTC m=+146.514509186" watchObservedRunningTime="2026-02-01 07:24:33.396912987 +0000 UTC m=+146.517349421" Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.447000 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.447181 4835 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.947155212 +0000 UTC m=+147.067591646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.447382 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.447741 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.947731237 +0000 UTC m=+147.068167671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.548752 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.548950 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.048923275 +0000 UTC m=+147.169359709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.550050 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.550343 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.050332712 +0000 UTC m=+147.170769146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.651142 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.651363 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.151329705 +0000 UTC m=+147.271766159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.651649 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.651970 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.151961792 +0000 UTC m=+147.272398226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.725365 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.725727 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.752663 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.753101 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.253086688 +0000 UTC m=+147.373523122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.854077 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.854381 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.354367578 +0000 UTC m=+147.474804012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.897790 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 07:24:33 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 01 07:24:33 crc kubenswrapper[4835]: [+]process-running ok Feb 01 07:24:33 crc kubenswrapper[4835]: healthz check failed Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.897837 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.955521 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.955806 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.455790962 +0000 UTC m=+147.576227396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.057026 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.057557 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.557545674 +0000 UTC m=+147.677982108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.066730 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" event={"ID":"ac6d201a-b05d-47ab-b71f-0859b88f0024","Type":"ContainerStarted","Data":"b2e5af8ce77456d4131584133f8d5c138df4117bb9cb6d6c90baaf9d40c354e0"} Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.066783 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" event={"ID":"ac6d201a-b05d-47ab-b71f-0859b88f0024","Type":"ContainerStarted","Data":"00737887f065d25ed96621f33158323f8d0660e440b7ef398fe15c9a4089207a"} Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.066793 4835 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mjg6g container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.066827 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.089735 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.141316 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" 
podStartSLOduration=121.141297932 podStartE2EDuration="2m1.141297932s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.438293478 +0000 UTC m=+146.558729912" watchObservedRunningTime="2026-02-01 07:24:34.141297932 +0000 UTC m=+147.261734376" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.159913 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.161302 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.661288049 +0000 UTC m=+147.781724483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.183300 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.217595 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.221600 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.266707 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.267104 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.767089878 +0000 UTC m=+147.887526302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.368114 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.368277 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.868253465 +0000 UTC m=+147.988689889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.368351 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.368862 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.868844661 +0000 UTC m=+147.989281155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.469315 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.469511 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.969484885 +0000 UTC m=+148.089921319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.469567 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.469869 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.969856824 +0000 UTC m=+148.090293258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.556864 4835 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.570608 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.570790 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.070767905 +0000 UTC m=+148.191204339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.671868 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.672154 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.172141386 +0000 UTC m=+148.292577820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.725314 4835 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-01T07:24:34.557057523Z","Handler":null,"Name":""} Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.743705 4835 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.743741 4835 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.772535 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.799682 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.873982 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.898855 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
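The span above is a CSI registration race playing out: every MountVolume.MountDevice attempt for the image-registry PVC, and every UnmountVolume.TearDown attempt for the old pod's copy of the same volume, fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" and is requeued with a fixed durationBeforeRetry of 500ms. The loop breaks once the plugin watcher picks up the registration socket at 07:24:34.556864, the driver is validated and registered at 07:24:34.743741, and the very next retries succeed (TearDown at 07:24:34.799682 above; the device mount is skipped because STAGE_UNSTAGE_VOLUME is not advertised, and SetUp completes just below). The following is a minimal Go sketch of that retry-until-registered pattern, not kubelet source: the registry type, the lookup helper, and the 1600ms registration delay are invented for the illustration; only the 500ms backoff and the driver name come from the log.

```go
// Sketch of the fixed-backoff retry loop visible in the log above: a volume
// operation fails while the CSI driver is absent from the registry, and the
// reconciler retries every 500ms until plugin registration lands.
package main

import (
	"fmt"
	"sync"
	"time"
)

// registry stands in for kubelet's table of registered CSI drivers.
type registry struct {
	mu      sync.Mutex
	drivers map[string]bool
}

// lookup fails with the same wording the log shows until the driver appears.
func (r *registry) lookup(name string) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if !r.drivers[name] {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	r := &registry{drivers: map[string]bool{}}
	driverName := "kubevirt.io.hostpath-provisioner"

	// Plugin registration arrives asynchronously, like the RegisterPlugin
	// event at 07:24:34.725314 above; the delay here is made up.
	go func() {
		time.Sleep(1600 * time.Millisecond)
		r.mu.Lock()
		r.drivers[driverName] = true
		r.mu.Unlock()
	}()

	const durationBeforeRetry = 500 * time.Millisecond // matches the log
	for attempt := 1; ; attempt++ {
		if err := r.lookup(driverName); err == nil {
			fmt.Printf("attempt %d: MountVolume.MountDevice succeeded\n", attempt)
			return
		} else {
			fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, durationBeforeRetry)
		}
		time.Sleep(durationBeforeRetry)
	}
}
```

The backoff here is deliberately fixed, matching the repeated "(durationBeforeRetry 500ms)" entries; kubelet's nestedpendingoperations layer additionally deduplicates in-flight operations per volume, which is why each mount and unmount attempt appears exactly once per 500ms window rather than piling up.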
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.898895 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.899626 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 07:24:34 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 01 07:24:34 crc kubenswrapper[4835]: [+]process-running ok Feb 01 07:24:34 crc kubenswrapper[4835]: healthz check failed Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.899663 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.072850 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" event={"ID":"ac6d201a-b05d-47ab-b71f-0859b88f0024","Type":"ContainerStarted","Data":"90a4a04442a02eaea193885caae199909448634bbf117c7dbc60ef00386e3102"} Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.072902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" event={"ID":"ac6d201a-b05d-47ab-b71f-0859b88f0024","Type":"ContainerStarted","Data":"a401e08e1ac424cd796677f071dc26d5b98c279278d6c953705554f700c3f702"} Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.073327 4835 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mjg6g container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.073364 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.084750 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.090704 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.103284 
4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" podStartSLOduration=9.103265772 podStartE2EDuration="9.103265772s" podCreationTimestamp="2026-02-01 07:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:35.100760656 +0000 UTC m=+148.221197090" watchObservedRunningTime="2026-02-01 07:24:35.103265772 +0000 UTC m=+148.223702206" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.169854 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.374183 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t677t"] Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.375045 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.381938 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.384447 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t677t"] Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.399108 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"] Feb 01 07:24:35 crc kubenswrapper[4835]: W0201 07:24:35.403559 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac521dca_2154_40bb_bbdb_a22e3d6abd72.slice/crio-7009647035bcb9b3d9a9385f910f574abe92ca7bc6f2836a8743b47eb765ed4a WatchSource:0}: Error finding container 7009647035bcb9b3d9a9385f910f574abe92ca7bc6f2836a8743b47eb765ed4a: Status 404 returned error can't find the container with id 7009647035bcb9b3d9a9385f910f574abe92ca7bc6f2836a8743b47eb765ed4a Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.482116 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.482165 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k72t5\" (UniqueName: \"kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.482204 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.577472 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.582350 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"] Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583357 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583420 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583475 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583521 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583555 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583575 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583592 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k72t5\" (UniqueName: \"kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583822 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.584259 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.584874 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.586048 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.588479 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.591178 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.591354 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.593511 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.602266 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.603320 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"] Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.609699 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k72t5\" (UniqueName: \"kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.685016 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.685078 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.685178 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh6bn\" (UniqueName: \"kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.689917 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.691940 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t677t" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.777893 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7n8wh"] Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.781574 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.787073 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.787113 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.787132 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh6bn\" (UniqueName: \"kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.787712 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.787790 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.800118 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7n8wh"] Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.811758 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh6bn\" (UniqueName: \"kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.888144 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.888179 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7jhx\" (UniqueName: \"kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.888224 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.889589 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.899086 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 07:24:35 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 01 07:24:35 crc kubenswrapper[4835]: [+]process-running ok Feb 01 07:24:35 crc kubenswrapper[4835]: healthz check failed Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.899147 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.900439 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:24:35 crc kubenswrapper[4835]: W0201 07:24:35.942351 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-09b6d4ce2b3030e6207a49ca9214b7fd6cd091887ee3157104072e82fd9a8a10 WatchSource:0}: Error finding container 09b6d4ce2b3030e6207a49ca9214b7fd6cd091887ee3157104072e82fd9a8a10: Status 404 returned error can't find the container with id 09b6d4ce2b3030e6207a49ca9214b7fd6cd091887ee3157104072e82fd9a8a10 Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.955461 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t677t"] Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.972732 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"] Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.973856 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.989036 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.989097 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7jhx\" (UniqueName: \"kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.989157 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.989828 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.990106 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.996054 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"] Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.011488 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7jhx\" (UniqueName: \"kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.090929 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c7w5\" (UniqueName: \"kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.090969 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.091051 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.093266 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" event={"ID":"ac521dca-2154-40bb-bbdb-a22e3d6abd72","Type":"ContainerStarted","Data":"3f33f19419e62411bac7a2082cf36c839014695310e5de008fdbd44a3e0eba81"} Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.093303 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" event={"ID":"ac521dca-2154-40bb-bbdb-a22e3d6abd72","Type":"ContainerStarted","Data":"7009647035bcb9b3d9a9385f910f574abe92ca7bc6f2836a8743b47eb765ed4a"} Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.094146 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.101776 4835 generic.go:334] "Generic (PLEG): container finished" podID="137b200e-5dcd-43c9-82e2-332071d84cb0" containerID="98c793df94b793188e86124f6ff1a8161f18d725c6666c0e72eb3d6113d10246" exitCode=0 Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.101832 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" event={"ID":"137b200e-5dcd-43c9-82e2-332071d84cb0","Type":"ContainerDied","Data":"98c793df94b793188e86124f6ff1a8161f18d725c6666c0e72eb3d6113d10246"} Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.113190 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" podStartSLOduration=124.113169708 podStartE2EDuration="2m4.113169708s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:36.112357476 +0000 UTC m=+149.232793910" watchObservedRunningTime="2026-02-01 07:24:36.113169708 +0000 UTC m=+149.233606142" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.115386 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.119490 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"09b6d4ce2b3030e6207a49ca9214b7fd6cd091887ee3157104072e82fd9a8a10"} Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.120318 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerStarted","Data":"8633807aa4c1b4534aedf9236769294f25ed6ac597e2c0fda34cf924f7b62039"} Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.121855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a3c2f7856c30944518677107649ae5b93411db83b03c28be8ced56e33c6a709e"} Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.121873 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9962c288a34cb7f1e80f9e6be04f9e3b5f0b287b4fdabd815689c3d63c7dfab1"} Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.191954 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.192010 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c7w5\" (UniqueName: \"kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.192038 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.193795 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.196609 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.227649 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c7w5\" 
(UniqueName: \"kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:36 crc kubenswrapper[4835]: W0201 07:24:36.236572 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-043d6986445c5e721ecb76bb07f1db5d9cf8674bd03c2971a930672817bda3c6 WatchSource:0}: Error finding container 043d6986445c5e721ecb76bb07f1db5d9cf8674bd03c2971a930672817bda3c6: Status 404 returned error can't find the container with id 043d6986445c5e721ecb76bb07f1db5d9cf8674bd03c2971a930672817bda3c6 Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.268490 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"] Feb 01 07:24:36 crc kubenswrapper[4835]: W0201 07:24:36.298734 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a177b30_3240_49d8_b0c5_b74f8e8f4c7e.slice/crio-34d744c0f2118911ec3770b8a37e279293e3d0075191d345f7ef2f24b56383a6 WatchSource:0}: Error finding container 34d744c0f2118911ec3770b8a37e279293e3d0075191d345f7ef2f24b56383a6: Status 404 returned error can't find the container with id 34d744c0f2118911ec3770b8a37e279293e3d0075191d345f7ef2f24b56383a6 Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.314567 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.420036 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7n8wh"] Feb 01 07:24:36 crc kubenswrapper[4835]: W0201 07:24:36.448756 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf562492e_dbf9_440e_978a_603956fc464e.slice/crio-86a332e6785f7fd31c68a8369c40ba5c5a557e81b2b71995f91b8e3ba6b2e274 WatchSource:0}: Error finding container 86a332e6785f7fd31c68a8369c40ba5c5a557e81b2b71995f91b8e3ba6b2e274: Status 404 returned error can't find the container with id 86a332e6785f7fd31c68a8369c40ba5c5a557e81b2b71995f91b8e3ba6b2e274 Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.734772 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"] Feb 01 07:24:36 crc kubenswrapper[4835]: W0201 07:24:36.744384 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3a136e2_3caa_4ed0_960a_6b6a0fdef39e.slice/crio-c56ac053edf9fdbd97a44ab1c01dec3b54c9bd91c581423e5a21d7786e48591e WatchSource:0}: Error finding container c56ac053edf9fdbd97a44ab1c01dec3b54c9bd91c581423e5a21d7786e48591e: Status 404 returned error can't find the container with id c56ac053edf9fdbd97a44ab1c01dec3b54c9bd91c581423e5a21d7786e48591e Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.896655 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 07:24:36 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 01 07:24:36 crc kubenswrapper[4835]: 
[+]process-running ok Feb 01 07:24:36 crc kubenswrapper[4835]: healthz check failed Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.896728 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.081955 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.087160 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.130702 4835 generic.go:334] "Generic (PLEG): container finished" podID="f562492e-dbf9-440e-978a-603956fc464e" containerID="c6c784d52b5c200fbc9c5b7fd427e7a9a01fe58abdfbe2cd4a7fa8dbd1de744a" exitCode=0 Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.130768 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerDied","Data":"c6c784d52b5c200fbc9c5b7fd427e7a9a01fe58abdfbe2cd4a7fa8dbd1de744a"} Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.130794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerStarted","Data":"86a332e6785f7fd31c68a8369c40ba5c5a557e81b2b71995f91b8e3ba6b2e274"} Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.132672 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.133023 4835 generic.go:334] "Generic (PLEG): container finished" podID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerID="a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f" exitCode=0 Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.133110 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerDied","Data":"a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f"} Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.133134 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerStarted","Data":"c56ac053edf9fdbd97a44ab1c01dec3b54c9bd91c581423e5a21d7786e48591e"} Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.136176 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2d2fb313fc51758cd94b0a57062913771b944df11c6ef5c890a7119fbe4f88ac"} Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.136382 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.137447 4835 generic.go:334] "Generic (PLEG): container finished" podID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerID="eac60a2bcfc7a27f8cce064694d441e59039265b959d26823af533d85c7dcf10" exitCode=0 Feb 01 07:24:37 crc 
kubenswrapper[4835]: I0201 07:24:37.137496 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerDied","Data":"eac60a2bcfc7a27f8cce064694d441e59039265b959d26823af533d85c7dcf10"} Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.137514 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerStarted","Data":"34d744c0f2118911ec3770b8a37e279293e3d0075191d345f7ef2f24b56383a6"} Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.139716 4835 generic.go:334] "Generic (PLEG): container finished" podID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerID="7270b81f0145b4123ee2f475f3f90b8aa11e59eef5e948db9ab2c46452e1838a" exitCode=0 Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.139776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerDied","Data":"7270b81f0145b4123ee2f475f3f90b8aa11e59eef5e948db9ab2c46452e1838a"} Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.141669 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"da61315c3bbb176740f6113e98cef0959c4e498062bf542699f7d7a572634351"} Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.141697 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"043d6986445c5e721ecb76bb07f1db5d9cf8674bd03c2971a930672817bda3c6"} Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.452003 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.510565 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume\") pod \"137b200e-5dcd-43c9-82e2-332071d84cb0\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.510624 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume\") pod \"137b200e-5dcd-43c9-82e2-332071d84cb0\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.510646 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49g4h\" (UniqueName: \"kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h\") pod \"137b200e-5dcd-43c9-82e2-332071d84cb0\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.513019 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume" (OuterVolumeSpecName: "config-volume") pod "137b200e-5dcd-43c9-82e2-332071d84cb0" (UID: "137b200e-5dcd-43c9-82e2-332071d84cb0"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.520227 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h" (OuterVolumeSpecName: "kube-api-access-49g4h") pod "137b200e-5dcd-43c9-82e2-332071d84cb0" (UID: "137b200e-5dcd-43c9-82e2-332071d84cb0"). InnerVolumeSpecName "kube-api-access-49g4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.520493 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "137b200e-5dcd-43c9-82e2-332071d84cb0" (UID: "137b200e-5dcd-43c9-82e2-332071d84cb0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.572734 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4xx49"] Feb 01 07:24:37 crc kubenswrapper[4835]: E0201 07:24:37.572936 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137b200e-5dcd-43c9-82e2-332071d84cb0" containerName="collect-profiles" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.572960 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="137b200e-5dcd-43c9-82e2-332071d84cb0" containerName="collect-profiles" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.573078 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="137b200e-5dcd-43c9-82e2-332071d84cb0" containerName="collect-profiles" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.573797 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.576326 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.584667 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xx49"] Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.615813 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.615891 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.615952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcvsr\" (UniqueName: \"kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.616171 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.616188 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.616198 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49g4h\" (UniqueName: \"kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.716857 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.716937 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcvsr\" (UniqueName: \"kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.716977 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content\") pod 
\"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.717379 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.717436 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.737576 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcvsr\" (UniqueName: \"kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.887164 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.897224 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 07:24:37 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 01 07:24:37 crc kubenswrapper[4835]: [+]process-running ok Feb 01 07:24:37 crc kubenswrapper[4835]: healthz check failed Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.897320 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.973304 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.974678 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.990374 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.020763 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.020811 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blpp8\" (UniqueName: \"kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.020989 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.123454 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.123787 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blpp8\" (UniqueName: \"kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.123842 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.123984 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.124193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.125203 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4xx49"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.144983 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blpp8\" (UniqueName: \"kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.151402 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" event={"ID":"137b200e-5dcd-43c9-82e2-332071d84cb0","Type":"ContainerDied","Data":"42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5"} Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.151457 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.151518 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.165100 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.165155 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.178331 4835 patch_prober.go:28] interesting pod/console-f9d7485db-8hgqx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.179003 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8hgqx" podUID="9154a093-1841-44f5-a71d-e42f5c19dfba" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.318349 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.578979 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.580478 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.583946 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.590838 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.629327 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.629370 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.629403 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97wl9\" (UniqueName: \"kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.730400 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.730677 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.730708 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97wl9\" (UniqueName: \"kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.731488 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.731720 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " 
pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.751829 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97wl9\" (UniqueName: \"kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.772142 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.893128 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.900040 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.907167 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.973039 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.974024 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.979317 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.036343 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bggh7\" (UniqueName: \"kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.036394 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.036430 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.138065 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.138347 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.138513 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bggh7\" (UniqueName: \"kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.139094 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.140227 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.149708 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.159365 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bggh7\" (UniqueName: \"kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.164057 4835 generic.go:334] "Generic (PLEG): container finished" podID="9b287031-510c-410c-ade6-c2cf7a48e363" containerID="3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f" exitCode=0 Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.164147 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerDied","Data":"3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f"} Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.164188 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerStarted","Data":"50bb18dda4afd99c54bbc442fbcd2bb9c50ee2eb6dac4877186bb6aa56a4b49b"} Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.170183 4835 generic.go:334] "Generic (PLEG): container finished" podID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerID="b14cf051de6ab1294efac8b8b8e42b820cf594040b129fc04b183d93a8efbf57" exitCode=0 Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.170904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerDied","Data":"b14cf051de6ab1294efac8b8b8e42b820cf594040b129fc04b183d93a8efbf57"} Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.171166 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerStarted","Data":"dea430e052099dd47c2c324f9a18af947b95755e422272ec8bbff41882bef5e5"} Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.174766 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.176088 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.176969 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.180317 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.182626 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.191674 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.239499 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.239610 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.264337 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.342046 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.342146 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.342170 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.362692 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.371782 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.435206 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8v8n container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.435234 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8v8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.435272 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k8v8n" podUID="79c369eb-e17d-4a32-9167-934aa23fd4fc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.435288 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8v8n" podUID="79c369eb-e17d-4a32-9167-934aa23fd4fc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.519528 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.600011 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.767918 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:24:39 crc kubenswrapper[4835]: W0201 07:24:39.822163 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc8c2486_a383_48cb_aefe_1610bc1c534f.slice/crio-60136c9d9c1fa01ab239559ff4cf41446038fd0cd99c254158238a21609db4a7 WatchSource:0}: Error finding container 60136c9d9c1fa01ab239559ff4cf41446038fd0cd99c254158238a21609db4a7: Status 404 returned error can't find the container with id 60136c9d9c1fa01ab239559ff4cf41446038fd0cd99c254158238a21609db4a7 Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.873697 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.996290 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.998265 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.001674 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.001909 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.002120 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.056107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.056184 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.157349 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.157507 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.157617 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.191016 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.195172 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d271f6d-4f2f-40e8-a928-4a88a2439f17","Type":"ContainerStarted","Data":"9db8075d36839f752f941857c0522159a35de95a213d37df8437d5339245dd74"} Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.214431 4835 generic.go:334] "Generic (PLEG): container finished" podID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerID="d5974ea84742510757e055f310d0049c446f1e2fe023968cfe1b5034d72af99c" exitCode=0 Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.215670 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerDied","Data":"d5974ea84742510757e055f310d0049c446f1e2fe023968cfe1b5034d72af99c"} Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.215735 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerStarted","Data":"46b5cafa1f07b5021e9e78fc5e6be54cf12c37d6cc9f28c581409330362b0959"} Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.223287 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerID="22e3a2a64402097b404fc7d0b7e471cb7339456b1827cdc5eeb1a1b4417b2cf4" exitCode=0 Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.223418 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerDied","Data":"22e3a2a64402097b404fc7d0b7e471cb7339456b1827cdc5eeb1a1b4417b2cf4"} Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.223478 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerStarted","Data":"60136c9d9c1fa01ab239559ff4cf41446038fd0cd99c254158238a21609db4a7"} Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.373675 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.955351 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 01 07:24:40 crc kubenswrapper[4835]: W0201 07:24:40.967801 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5e745045_e905_4988_b768_a0eac1b93996.slice/crio-ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99 WatchSource:0}: Error finding container ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99: Status 404 returned error can't find the container with id ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99 Feb 01 07:24:41 crc kubenswrapper[4835]: I0201 07:24:41.319205 4835 generic.go:334] "Generic (PLEG): container finished" podID="2d271f6d-4f2f-40e8-a928-4a88a2439f17" containerID="1e6fbac6dced342cacb3472feaa24aa426ebcb958226b83d5ee2d270b6503b08" exitCode=0 Feb 01 07:24:41 crc kubenswrapper[4835]: I0201 07:24:41.319307 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d271f6d-4f2f-40e8-a928-4a88a2439f17","Type":"ContainerDied","Data":"1e6fbac6dced342cacb3472feaa24aa426ebcb958226b83d5ee2d270b6503b08"} Feb 01 07:24:41 crc kubenswrapper[4835]: I0201 07:24:41.324846 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e745045-e905-4988-b768-a0eac1b93996","Type":"ContainerStarted","Data":"ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99"} Feb 01 07:24:41 crc kubenswrapper[4835]: I0201 07:24:41.635719 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:42 crc kubenswrapper[4835]: I0201 07:24:42.340610 4835 generic.go:334] "Generic (PLEG): container finished" podID="5e745045-e905-4988-b768-a0eac1b93996" containerID="679b5df1a39c891464657a42281a4ead0a7d17b93b75b99d7f25af9269ddb1fc" exitCode=0 Feb 01 07:24:42 crc kubenswrapper[4835]: I0201 07:24:42.340825 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e745045-e905-4988-b768-a0eac1b93996","Type":"ContainerDied","Data":"679b5df1a39c891464657a42281a4ead0a7d17b93b75b99d7f25af9269ddb1fc"} Feb 01 07:24:48 crc kubenswrapper[4835]: I0201 07:24:48.181671 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:48 crc kubenswrapper[4835]: I0201 07:24:48.187347 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.255376 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.258479 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.311514 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access\") pod \"5e745045-e905-4988-b768-a0eac1b93996\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.311581 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access\") pod \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.311605 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir\") pod \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.311641 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir\") pod \"5e745045-e905-4988-b768-a0eac1b93996\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.311925 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e745045-e905-4988-b768-a0eac1b93996" (UID: "5e745045-e905-4988-b768-a0eac1b93996"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.312145 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d271f6d-4f2f-40e8-a928-4a88a2439f17" (UID: "2d271f6d-4f2f-40e8-a928-4a88a2439f17"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.317377 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d271f6d-4f2f-40e8-a928-4a88a2439f17" (UID: "2d271f6d-4f2f-40e8-a928-4a88a2439f17"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.319023 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e745045-e905-4988-b768-a0eac1b93996" (UID: "5e745045-e905-4988-b768-a0eac1b93996"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.392143 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d271f6d-4f2f-40e8-a928-4a88a2439f17","Type":"ContainerDied","Data":"9db8075d36839f752f941857c0522159a35de95a213d37df8437d5339245dd74"} Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.392178 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db8075d36839f752f941857c0522159a35de95a213d37df8437d5339245dd74" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.392226 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.395567 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e745045-e905-4988-b768-a0eac1b93996","Type":"ContainerDied","Data":"ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99"} Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.395608 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.395668 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.416894 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.416920 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.416930 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.416940 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.453600 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:55 crc kubenswrapper[4835]: I0201 07:24:55.181183 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:55 crc kubenswrapper[4835]: I0201 07:24:55.191831 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:24:55 crc kubenswrapper[4835]: I0201 07:24:55.191935 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" 
podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:24:55 crc kubenswrapper[4835]: I0201 07:24:55.713811 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:55 crc kubenswrapper[4835]: I0201 07:24:55.837247 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:56 crc kubenswrapper[4835]: I0201 07:24:56.103544 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:25:06 crc kubenswrapper[4835]: E0201 07:25:06.077754 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 01 07:25:06 crc kubenswrapper[4835]: E0201 07:25:06.078698 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k72t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-t677t_openshift-marketplace(835b2622-9047-4e3a-b019-6f15c5fd4566): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 07:25:06 crc kubenswrapper[4835]: E0201 07:25:06.079994 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-t677t" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.115723 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-t677t" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.195245 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.195853 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c7w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ng2z7_openshift-marketplace(e3a136e2-3caa-4ed0-960a-6b6a0fdef39e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.197300 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ng2z7" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.242202 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.242383 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh6bn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zbfbl_openshift-marketplace(7a177b30-3240-49d8-b0c5-b74f8e8f4c7e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.243615 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-zbfbl" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" Feb 01 07:25:09 crc kubenswrapper[4835]: I0201 07:25:09.575970 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.526455 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zbfbl" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.526534 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ng2z7" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.623897 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.624250 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcvsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4xx49_openshift-marketplace(602186bd-e71a-4ce1-ad39-c56495e815c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.625680 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4xx49" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.644727 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.645011 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blpp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tlf77_openshift-marketplace(9b287031-510c-410c-ade6-c2cf7a48e363): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.646519 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tlf77" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" Feb 01 07:25:10 crc kubenswrapper[4835]: I0201 07:25:10.980504 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2msm5"] Feb 01 07:25:10 crc kubenswrapper[4835]: W0201 07:25:10.991570 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaf346fd_1c47_4f35_a5e6_79f7ac8fcafe.slice/crio-976a92bf8966e490f7cd8f9dc0d4b383083d4002d7be8d0e9e2bbb527c97e57e WatchSource:0}: Error finding container 976a92bf8966e490f7cd8f9dc0d4b383083d4002d7be8d0e9e2bbb527c97e57e: Status 404 returned error can't find the container with id 976a92bf8966e490f7cd8f9dc0d4b383083d4002d7be8d0e9e2bbb527c97e57e Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.557488 4835 generic.go:334] "Generic (PLEG): container finished" podID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerID="deccbf5bf47273db8305d287368e84a9555304937b617c52aaad45a3c56162a2" exitCode=0 Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.557722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerDied","Data":"deccbf5bf47273db8305d287368e84a9555304937b617c52aaad45a3c56162a2"} Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.564994 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerID="7dfe92877369cb97f3ec7447941cb4bb3ac1fbbf67088e96c4fce3815dd8e8dc" exitCode=0 Feb 01 07:25:11 crc kubenswrapper[4835]: 
I0201 07:25:11.565054 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerDied","Data":"7dfe92877369cb97f3ec7447941cb4bb3ac1fbbf67088e96c4fce3815dd8e8dc"} Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.572216 4835 generic.go:334] "Generic (PLEG): container finished" podID="f562492e-dbf9-440e-978a-603956fc464e" containerID="25fdb854cbe1bf7efd7e7f32850a0d48ca8d03934de27955c9c0311a3869e9eb" exitCode=0 Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.572288 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerDied","Data":"25fdb854cbe1bf7efd7e7f32850a0d48ca8d03934de27955c9c0311a3869e9eb"} Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.580776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2msm5" event={"ID":"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe","Type":"ContainerStarted","Data":"eb60a4d27a4cc1b5f3c82da369f83ac327916bf671ccae285fd0fb45c373ecf6"} Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.580820 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2msm5" event={"ID":"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe","Type":"ContainerStarted","Data":"165e283bd22cdccea9ad0b40eddea97692652dff08663bcc215451372333ccca"} Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.580833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2msm5" event={"ID":"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe","Type":"ContainerStarted","Data":"976a92bf8966e490f7cd8f9dc0d4b383083d4002d7be8d0e9e2bbb527c97e57e"} Feb 01 07:25:11 crc kubenswrapper[4835]: E0201 07:25:11.582697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4xx49" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" Feb 01 07:25:11 crc kubenswrapper[4835]: E0201 07:25:11.582923 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-tlf77" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.647319 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2msm5" podStartSLOduration=159.647301693 podStartE2EDuration="2m39.647301693s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:25:11.645730442 +0000 UTC m=+184.766166906" watchObservedRunningTime="2026-02-01 07:25:11.647301693 +0000 UTC m=+184.767738127" Feb 01 07:25:12 crc kubenswrapper[4835]: I0201 07:25:12.596941 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerStarted","Data":"5c8d88d803cbf808d4f6e7bbccdd22422fa76272b787ff433136b59f5dde80fe"} Feb 01 07:25:12 crc kubenswrapper[4835]: I0201 
07:25:12.600969 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerStarted","Data":"9cd63e168f5ee1bba32762ea60b5535c14b22b6a31b98e3419ead8dd99d4331a"} Feb 01 07:25:12 crc kubenswrapper[4835]: I0201 07:25:12.604192 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerStarted","Data":"c6d524ddca405a0b23f12afddf880a49b965b141dbf1843686ebe4bac83255ff"} Feb 01 07:25:12 crc kubenswrapper[4835]: I0201 07:25:12.620728 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7n8wh" podStartSLOduration=2.541320089 podStartE2EDuration="37.620706396s" podCreationTimestamp="2026-02-01 07:24:35 +0000 UTC" firstStartedPulling="2026-02-01 07:24:37.132177712 +0000 UTC m=+150.252614176" lastFinishedPulling="2026-02-01 07:25:12.211563999 +0000 UTC m=+185.332000483" observedRunningTime="2026-02-01 07:25:12.61860175 +0000 UTC m=+185.739038184" watchObservedRunningTime="2026-02-01 07:25:12.620706396 +0000 UTC m=+185.741142850" Feb 01 07:25:12 crc kubenswrapper[4835]: I0201 07:25:12.635728 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k5smh" podStartSLOduration=2.8461160039999998 podStartE2EDuration="34.635712561s" podCreationTimestamp="2026-02-01 07:24:38 +0000 UTC" firstStartedPulling="2026-02-01 07:24:40.227154716 +0000 UTC m=+153.347591150" lastFinishedPulling="2026-02-01 07:25:12.016751283 +0000 UTC m=+185.137187707" observedRunningTime="2026-02-01 07:25:12.633607336 +0000 UTC m=+185.754043780" watchObservedRunningTime="2026-02-01 07:25:12.635712561 +0000 UTC m=+185.756148995" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.587531 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7hk7" podStartSLOduration=4.858576488 podStartE2EDuration="36.587509007s" podCreationTimestamp="2026-02-01 07:24:38 +0000 UTC" firstStartedPulling="2026-02-01 07:24:40.217554482 +0000 UTC m=+153.337990916" lastFinishedPulling="2026-02-01 07:25:11.946487011 +0000 UTC m=+185.066923435" observedRunningTime="2026-02-01 07:25:12.655129033 +0000 UTC m=+185.775565467" watchObservedRunningTime="2026-02-01 07:25:14.587509007 +0000 UTC m=+187.707945441" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.589636 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 01 07:25:14 crc kubenswrapper[4835]: E0201 07:25:14.589886 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e745045-e905-4988-b768-a0eac1b93996" containerName="pruner" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.589899 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e745045-e905-4988-b768-a0eac1b93996" containerName="pruner" Feb 01 07:25:14 crc kubenswrapper[4835]: E0201 07:25:14.589919 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d271f6d-4f2f-40e8-a928-4a88a2439f17" containerName="pruner" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.589928 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d271f6d-4f2f-40e8-a928-4a88a2439f17" containerName="pruner" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.590044 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5e745045-e905-4988-b768-a0eac1b93996" containerName="pruner" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.590061 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d271f6d-4f2f-40e8-a928-4a88a2439f17" containerName="pruner" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.590481 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.594267 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.595137 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.598864 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.719660 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.720600 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.821861 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.821959 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.822027 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.841658 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.922440 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:15 crc kubenswrapper[4835]: I0201 07:25:15.307349 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 01 07:25:15 crc kubenswrapper[4835]: I0201 07:25:15.624670 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6187f01f-46de-413a-92cc-bc0f1375d41d","Type":"ContainerStarted","Data":"c31d81fc5c0cd09625c38c8ab4a7ca33f56c0b53ab94891b3e9f920ec9a6b7b8"} Feb 01 07:25:15 crc kubenswrapper[4835]: I0201 07:25:15.705689 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:25:16 crc kubenswrapper[4835]: I0201 07:25:16.115995 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:25:16 crc kubenswrapper[4835]: I0201 07:25:16.116033 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:25:16 crc kubenswrapper[4835]: I0201 07:25:16.271185 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:25:16 crc kubenswrapper[4835]: I0201 07:25:16.632567 4835 generic.go:334] "Generic (PLEG): container finished" podID="6187f01f-46de-413a-92cc-bc0f1375d41d" containerID="685e3c665dd59964bd38d414001e963309cce90e76dab346d0a1e34c8e97e399" exitCode=0 Feb 01 07:25:16 crc kubenswrapper[4835]: I0201 07:25:16.632620 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6187f01f-46de-413a-92cc-bc0f1375d41d","Type":"ContainerDied","Data":"685e3c665dd59964bd38d414001e963309cce90e76dab346d0a1e34c8e97e399"} Feb 01 07:25:17 crc kubenswrapper[4835]: I0201 07:25:17.552170 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.002044 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.161768 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir\") pod \"6187f01f-46de-413a-92cc-bc0f1375d41d\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.161830 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access\") pod \"6187f01f-46de-413a-92cc-bc0f1375d41d\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.161884 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6187f01f-46de-413a-92cc-bc0f1375d41d" (UID: "6187f01f-46de-413a-92cc-bc0f1375d41d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.162061 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.169552 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6187f01f-46de-413a-92cc-bc0f1375d41d" (UID: "6187f01f-46de-413a-92cc-bc0f1375d41d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.263553 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.646592 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6187f01f-46de-413a-92cc-bc0f1375d41d","Type":"ContainerDied","Data":"c31d81fc5c0cd09625c38c8ab4a7ca33f56c0b53ab94891b3e9f920ec9a6b7b8"} Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.646648 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c31d81fc5c0cd09625c38c8ab4a7ca33f56c0b53ab94891b3e9f920ec9a6b7b8" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.646678 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.907578 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.907779 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.372939 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.372993 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.424037 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.696132 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.735702 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.968694 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s7hk7" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="registry-server" probeResult="failure" output=< Feb 01 07:25:19 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 01 07:25:19 crc kubenswrapper[4835]: > Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.384182 4835 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 01 07:25:21 crc kubenswrapper[4835]: E0201 07:25:21.384814 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6187f01f-46de-413a-92cc-bc0f1375d41d" containerName="pruner" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.384841 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6187f01f-46de-413a-92cc-bc0f1375d41d" containerName="pruner" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.384932 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6187f01f-46de-413a-92cc-bc0f1375d41d" containerName="pruner" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.385398 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.387868 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.388147 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.389077 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.513501 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.513550 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.513748 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.614811 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.614873 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.614896 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.614951 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.615018 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.640324 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.660890 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k5smh" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="registry-server" containerID="cri-o://c6d524ddca405a0b23f12afddf880a49b965b141dbf1843686ebe4bac83255ff" gracePeriod=2 Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.709075 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:22 crc kubenswrapper[4835]: I0201 07:25:22.115848 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 01 07:25:22 crc kubenswrapper[4835]: I0201 07:25:22.666558 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9b454b8-f758-43d4-bd2b-93ebc807e06e","Type":"ContainerStarted","Data":"2997edb8ab02ca7e2da0f4120bdf3140a5e44974f1c0d9270cf560bcceec34c4"} Feb 01 07:25:22 crc kubenswrapper[4835]: I0201 07:25:22.669022 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerID="c6d524ddca405a0b23f12afddf880a49b965b141dbf1843686ebe4bac83255ff" exitCode=0 Feb 01 07:25:22 crc kubenswrapper[4835]: I0201 07:25:22.669076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerDied","Data":"c6d524ddca405a0b23f12afddf880a49b965b141dbf1843686ebe4bac83255ff"} Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.391570 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.544080 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities\") pod \"cc8c2486-a383-48cb-aefe-1610bc1c534f\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.544154 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bggh7\" (UniqueName: \"kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7\") pod \"cc8c2486-a383-48cb-aefe-1610bc1c534f\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.544176 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content\") pod \"cc8c2486-a383-48cb-aefe-1610bc1c534f\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.544899 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities" (OuterVolumeSpecName: "utilities") pod "cc8c2486-a383-48cb-aefe-1610bc1c534f" (UID: "cc8c2486-a383-48cb-aefe-1610bc1c534f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.548440 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7" (OuterVolumeSpecName: "kube-api-access-bggh7") pod "cc8c2486-a383-48cb-aefe-1610bc1c534f" (UID: "cc8c2486-a383-48cb-aefe-1610bc1c534f"). InnerVolumeSpecName "kube-api-access-bggh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.645660 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bggh7\" (UniqueName: \"kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.645984 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.675991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerDied","Data":"60136c9d9c1fa01ab239559ff4cf41446038fd0cd99c254158238a21609db4a7"} Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.676026 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.676043 4835 scope.go:117] "RemoveContainer" containerID="c6d524ddca405a0b23f12afddf880a49b965b141dbf1843686ebe4bac83255ff" Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.691671 4835 scope.go:117] "RemoveContainer" containerID="7dfe92877369cb97f3ec7447941cb4bb3ac1fbbf67088e96c4fce3815dd8e8dc" Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.711431 4835 scope.go:117] "RemoveContainer" containerID="22e3a2a64402097b404fc7d0b7e471cb7339456b1827cdc5eeb1a1b4417b2cf4" Feb 01 07:25:24 crc kubenswrapper[4835]: I0201 07:25:24.681882 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9b454b8-f758-43d4-bd2b-93ebc807e06e","Type":"ContainerStarted","Data":"3a62854f07efe9ee61bbc8b6cf4f08d0ff0e9a200d15a47492c6bdf618532148"} Feb 01 07:25:24 crc kubenswrapper[4835]: I0201 07:25:24.696250 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.69622913 podStartE2EDuration="3.69622913s" podCreationTimestamp="2026-02-01 07:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:25:24.694554781 +0000 UTC m=+197.814991215" watchObservedRunningTime="2026-02-01 07:25:24.69622913 +0000 UTC m=+197.816665574" Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.191577 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.191667 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.761298 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc8c2486-a383-48cb-aefe-1610bc1c534f" (UID: "cc8c2486-a383-48cb-aefe-1610bc1c534f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.772488 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.804690 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.807613 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:25:26 crc kubenswrapper[4835]: I0201 07:25:26.155371 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:25:27 crc kubenswrapper[4835]: I0201 07:25:27.572303 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" path="/var/lib/kubelet/pods/cc8c2486-a383-48cb-aefe-1610bc1c534f/volumes" Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.462229 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7n8wh"] Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.462900 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7n8wh" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="registry-server" containerID="cri-o://5c8d88d803cbf808d4f6e7bbccdd22422fa76272b787ff433136b59f5dde80fe" gracePeriod=2 Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.702818 4835 generic.go:334] "Generic (PLEG): container finished" podID="f562492e-dbf9-440e-978a-603956fc464e" containerID="5c8d88d803cbf808d4f6e7bbccdd22422fa76272b787ff433136b59f5dde80fe" exitCode=0 Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.702902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerDied","Data":"5c8d88d803cbf808d4f6e7bbccdd22422fa76272b787ff433136b59f5dde80fe"} Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.703941 4835 generic.go:334] "Generic (PLEG): container finished" podID="9b287031-510c-410c-ade6-c2cf7a48e363" containerID="5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd" exitCode=0 Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.703984 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerDied","Data":"5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd"} Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.707216 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerStarted","Data":"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b"} Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.711127 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerStarted","Data":"7fde970c7809bb8c50b149f97b8907cd34e5ed3f92e53b3f48046bec959d09ef"} Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.712436 4835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerStarted","Data":"1b7f8d984d304fa16176f9ff67b5f5c30b1244ad6e8dd4e1ef20f9098a0f7fe2"} Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.956497 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.993708 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.015818 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.112845 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7jhx\" (UniqueName: \"kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx\") pod \"f562492e-dbf9-440e-978a-603956fc464e\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.112953 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities\") pod \"f562492e-dbf9-440e-978a-603956fc464e\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.112996 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content\") pod \"f562492e-dbf9-440e-978a-603956fc464e\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.113773 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities" (OuterVolumeSpecName: "utilities") pod "f562492e-dbf9-440e-978a-603956fc464e" (UID: "f562492e-dbf9-440e-978a-603956fc464e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.118123 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx" (OuterVolumeSpecName: "kube-api-access-r7jhx") pod "f562492e-dbf9-440e-978a-603956fc464e" (UID: "f562492e-dbf9-440e-978a-603956fc464e"). InnerVolumeSpecName "kube-api-access-r7jhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.210148 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f562492e-dbf9-440e-978a-603956fc464e" (UID: "f562492e-dbf9-440e-978a-603956fc464e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.213889 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7jhx\" (UniqueName: \"kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.213919 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.213930 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:29 crc kubenswrapper[4835]: E0201 07:25:29.286465 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod602186bd_e71a_4ce1_ad39_c56495e815c3.slice/crio-b1f0e4a7c799308902bb8e0217a0c30fdd02e1a32fd2564302d2a528cea8ba75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod602186bd_e71a_4ce1_ad39_c56495e815c3.slice/crio-conmon-b1f0e4a7c799308902bb8e0217a0c30fdd02e1a32fd2564302d2a528cea8ba75.scope\": RecentStats: unable to find data in memory cache]" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.720386 4835 generic.go:334] "Generic (PLEG): container finished" podID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerID="7fde970c7809bb8c50b149f97b8907cd34e5ed3f92e53b3f48046bec959d09ef" exitCode=0 Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.720479 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerDied","Data":"7fde970c7809bb8c50b149f97b8907cd34e5ed3f92e53b3f48046bec959d09ef"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.729802 4835 generic.go:334] "Generic (PLEG): container finished" podID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerID="b1f0e4a7c799308902bb8e0217a0c30fdd02e1a32fd2564302d2a528cea8ba75" exitCode=0 Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.729833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerDied","Data":"b1f0e4a7c799308902bb8e0217a0c30fdd02e1a32fd2564302d2a528cea8ba75"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.732107 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerDied","Data":"1b7f8d984d304fa16176f9ff67b5f5c30b1244ad6e8dd4e1ef20f9098a0f7fe2"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.732109 4835 generic.go:334] "Generic (PLEG): container finished" podID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerID="1b7f8d984d304fa16176f9ff67b5f5c30b1244ad6e8dd4e1ef20f9098a0f7fe2" exitCode=0 Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.735321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" 
event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerDied","Data":"86a332e6785f7fd31c68a8369c40ba5c5a557e81b2b71995f91b8e3ba6b2e274"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.735363 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.735373 4835 scope.go:117] "RemoveContainer" containerID="5c8d88d803cbf808d4f6e7bbccdd22422fa76272b787ff433136b59f5dde80fe" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.740326 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerStarted","Data":"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.742634 4835 generic.go:334] "Generic (PLEG): container finished" podID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerID="4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b" exitCode=0 Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.742855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerDied","Data":"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.757797 4835 scope.go:117] "RemoveContainer" containerID="25fdb854cbe1bf7efd7e7f32850a0d48ca8d03934de27955c9c0311a3869e9eb" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.787162 4835 scope.go:117] "RemoveContainer" containerID="c6c784d52b5c200fbc9c5b7fd427e7a9a01fe58abdfbe2cd4a7fa8dbd1de744a" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.792256 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tlf77" podStartSLOduration=2.8687939399999998 podStartE2EDuration="52.792233351s" podCreationTimestamp="2026-02-01 07:24:37 +0000 UTC" firstStartedPulling="2026-02-01 07:24:39.172335847 +0000 UTC m=+152.292772281" lastFinishedPulling="2026-02-01 07:25:29.095775268 +0000 UTC m=+202.216211692" observedRunningTime="2026-02-01 07:25:29.788112632 +0000 UTC m=+202.908549066" watchObservedRunningTime="2026-02-01 07:25:29.792233351 +0000 UTC m=+202.912669795" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.805114 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7n8wh"] Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.807941 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7n8wh"] Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.748842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerStarted","Data":"e8d75b9cdb3185ff37877ed85d6d3372730274f7dbff223d7ea5c84fe296a601"} Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.752321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerStarted","Data":"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297"} Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.754254 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerStarted","Data":"0eea26ae4bb5a1954f72fbcc75d1e7903480a69577a36065cf6a4254e3efba68"} Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.756164 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerStarted","Data":"9eb022e2135b0596e33429e62d1e55cd8a0be16a9faa993cffd3947dfd050b0a"} Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.773340 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t677t" podStartSLOduration=2.7895730629999997 podStartE2EDuration="55.773323101s" podCreationTimestamp="2026-02-01 07:24:35 +0000 UTC" firstStartedPulling="2026-02-01 07:24:37.140751248 +0000 UTC m=+150.261187702" lastFinishedPulling="2026-02-01 07:25:30.124501296 +0000 UTC m=+203.244937740" observedRunningTime="2026-02-01 07:25:30.771431006 +0000 UTC m=+203.891867440" watchObservedRunningTime="2026-02-01 07:25:30.773323101 +0000 UTC m=+203.893759535" Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.791708 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ng2z7" podStartSLOduration=2.680829851 podStartE2EDuration="55.791684532s" podCreationTimestamp="2026-02-01 07:24:35 +0000 UTC" firstStartedPulling="2026-02-01 07:24:37.134980436 +0000 UTC m=+150.255416870" lastFinishedPulling="2026-02-01 07:25:30.245835107 +0000 UTC m=+203.366271551" observedRunningTime="2026-02-01 07:25:30.789437627 +0000 UTC m=+203.909874081" watchObservedRunningTime="2026-02-01 07:25:30.791684532 +0000 UTC m=+203.912120966" Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.813228 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zbfbl" podStartSLOduration=2.740649783 podStartE2EDuration="55.813211255s" podCreationTimestamp="2026-02-01 07:24:35 +0000 UTC" firstStartedPulling="2026-02-01 07:24:37.138438157 +0000 UTC m=+150.258874591" lastFinishedPulling="2026-02-01 07:25:30.210999629 +0000 UTC m=+203.331436063" observedRunningTime="2026-02-01 07:25:30.8123585 +0000 UTC m=+203.932794934" watchObservedRunningTime="2026-02-01 07:25:30.813211255 +0000 UTC m=+203.933647689" Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.836272 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4xx49" podStartSLOduration=2.823414578 podStartE2EDuration="53.836256862s" podCreationTimestamp="2026-02-01 07:24:37 +0000 UTC" firstStartedPulling="2026-02-01 07:24:39.17206315 +0000 UTC m=+152.292499584" lastFinishedPulling="2026-02-01 07:25:30.184905434 +0000 UTC m=+203.305341868" observedRunningTime="2026-02-01 07:25:30.834827001 +0000 UTC m=+203.955263435" watchObservedRunningTime="2026-02-01 07:25:30.836256862 +0000 UTC m=+203.956693296" Feb 01 07:25:31 crc kubenswrapper[4835]: I0201 07:25:31.574093 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f562492e-dbf9-440e-978a-603956fc464e" path="/var/lib/kubelet/pods/f562492e-dbf9-440e-978a-603956fc464e/volumes" Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.692652 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t677t" Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 
07:25:35.693634 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t677t" Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.764114 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t677t" Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.842524 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t677t" Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.901546 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.901611 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.951721 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:25:36 crc kubenswrapper[4835]: I0201 07:25:36.315583 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:25:36 crc kubenswrapper[4835]: I0201 07:25:36.315627 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:25:36 crc kubenswrapper[4835]: I0201 07:25:36.393139 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:25:36 crc kubenswrapper[4835]: I0201 07:25:36.870202 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:25:36 crc kubenswrapper[4835]: I0201 07:25:36.870646 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:25:37 crc kubenswrapper[4835]: I0201 07:25:37.888588 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:25:37 crc kubenswrapper[4835]: I0201 07:25:37.891807 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:25:37 crc kubenswrapper[4835]: I0201 07:25:37.929561 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.318469 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.318914 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.384463 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.658273 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"] Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.811070 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ng2z7" 
podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="registry-server" containerID="cri-o://78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" gracePeriod=2 Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.881103 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.882475 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.272190 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.454404 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities\") pod \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.454815 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content\") pod \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.454847 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c7w5\" (UniqueName: \"kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5\") pod \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.456531 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities" (OuterVolumeSpecName: "utilities") pod "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" (UID: "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.462629 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5" (OuterVolumeSpecName: "kube-api-access-5c7w5") pod "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" (UID: "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e"). InnerVolumeSpecName "kube-api-access-5c7w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.542480 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" (UID: "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.556521 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.556584 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.556601 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c7w5\" (UniqueName: \"kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.820829 4835 generic.go:334] "Generic (PLEG): container finished" podID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerID="78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" exitCode=0 Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.820902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerDied","Data":"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297"} Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.820931 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.820952 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerDied","Data":"c56ac053edf9fdbd97a44ab1c01dec3b54c9bd91c581423e5a21d7786e48591e"} Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.820978 4835 scope.go:117] "RemoveContainer" containerID="78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.844983 4835 scope.go:117] "RemoveContainer" containerID="4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.855008 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"] Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.866717 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"] Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.874472 4835 scope.go:117] "RemoveContainer" containerID="a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.908756 4835 scope.go:117] "RemoveContainer" containerID="78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" Feb 01 07:25:39 crc kubenswrapper[4835]: E0201 07:25:39.909359 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297\": container with ID starting with 78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297 not found: ID does not exist" containerID="78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.909398 
4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297"} err="failed to get container status \"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297\": rpc error: code = NotFound desc = could not find container \"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297\": container with ID starting with 78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297 not found: ID does not exist" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.909574 4835 scope.go:117] "RemoveContainer" containerID="4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b" Feb 01 07:25:39 crc kubenswrapper[4835]: E0201 07:25:39.910455 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b\": container with ID starting with 4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b not found: ID does not exist" containerID="4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.910534 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b"} err="failed to get container status \"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b\": rpc error: code = NotFound desc = could not find container \"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b\": container with ID starting with 4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b not found: ID does not exist" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.910586 4835 scope.go:117] "RemoveContainer" containerID="a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f" Feb 01 07:25:39 crc kubenswrapper[4835]: E0201 07:25:39.911079 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f\": container with ID starting with a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f not found: ID does not exist" containerID="a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.911113 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f"} err="failed to get container status \"a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f\": rpc error: code = NotFound desc = could not find container \"a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f\": container with ID starting with a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f not found: ID does not exist" Feb 01 07:25:41 crc kubenswrapper[4835]: I0201 07:25:41.058812 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:25:41 crc kubenswrapper[4835]: I0201 07:25:41.577770 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" path="/var/lib/kubelet/pods/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e/volumes" Feb 01 07:25:41 crc kubenswrapper[4835]: I0201 07:25:41.837399 4835 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-tlf77" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="registry-server" containerID="cri-o://88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33" gracePeriod=2 Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.588094 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" containerName="oauth-openshift" containerID="cri-o://3ce1b71be758dd076de182606cb238305ec470a936ab71da41c867e65c4d55e4" gracePeriod=15 Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.772126 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.846332 4835 generic.go:334] "Generic (PLEG): container finished" podID="9b287031-510c-410c-ade6-c2cf7a48e363" containerID="88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33" exitCode=0 Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.846437 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.846443 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerDied","Data":"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33"} Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.846558 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerDied","Data":"50bb18dda4afd99c54bbc442fbcd2bb9c50ee2eb6dac4877186bb6aa56a4b49b"} Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.846594 4835 scope.go:117] "RemoveContainer" containerID="88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.850017 4835 generic.go:334] "Generic (PLEG): container finished" podID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" containerID="3ce1b71be758dd076de182606cb238305ec470a936ab71da41c867e65c4d55e4" exitCode=0 Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.850111 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" event={"ID":"62724c3f-5c92-4e77-ba3a-0f6b7215f48a","Type":"ContainerDied","Data":"3ce1b71be758dd076de182606cb238305ec470a936ab71da41c867e65c4d55e4"} Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.873985 4835 scope.go:117] "RemoveContainer" containerID="5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.893776 4835 scope.go:117] "RemoveContainer" containerID="3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.904158 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content\") pod \"9b287031-510c-410c-ade6-c2cf7a48e363\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.904499 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-blpp8\" (UniqueName: \"kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8\") pod \"9b287031-510c-410c-ade6-c2cf7a48e363\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.904536 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities\") pod \"9b287031-510c-410c-ade6-c2cf7a48e363\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.905803 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities" (OuterVolumeSpecName: "utilities") pod "9b287031-510c-410c-ade6-c2cf7a48e363" (UID: "9b287031-510c-410c-ade6-c2cf7a48e363"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.914210 4835 scope.go:117] "RemoveContainer" containerID="88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33" Feb 01 07:25:42 crc kubenswrapper[4835]: E0201 07:25:42.915085 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33\": container with ID starting with 88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33 not found: ID does not exist" containerID="88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.915151 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33"} err="failed to get container status \"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33\": rpc error: code = NotFound desc = could not find container \"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33\": container with ID starting with 88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33 not found: ID does not exist" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.915254 4835 scope.go:117] "RemoveContainer" containerID="5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd" Feb 01 07:25:42 crc kubenswrapper[4835]: E0201 07:25:42.915802 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd\": container with ID starting with 5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd not found: ID does not exist" containerID="5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.915846 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd"} err="failed to get container status \"5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd\": rpc error: code = NotFound desc = could not find container \"5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd\": container with ID starting with 5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd not found: ID does not exist" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.915875 4835 scope.go:117] 
"RemoveContainer" containerID="3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f" Feb 01 07:25:42 crc kubenswrapper[4835]: E0201 07:25:42.916143 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f\": container with ID starting with 3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f not found: ID does not exist" containerID="3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.916181 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f"} err="failed to get container status \"3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f\": rpc error: code = NotFound desc = could not find container \"3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f\": container with ID starting with 3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f not found: ID does not exist" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.924359 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8" (OuterVolumeSpecName: "kube-api-access-blpp8") pod "9b287031-510c-410c-ade6-c2cf7a48e363" (UID: "9b287031-510c-410c-ade6-c2cf7a48e363"). InnerVolumeSpecName "kube-api-access-blpp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.944332 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b287031-510c-410c-ade6-c2cf7a48e363" (UID: "9b287031-510c-410c-ade6-c2cf7a48e363"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.006665 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.006711 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blpp8\" (UniqueName: \"kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.006727 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.111507 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.189389 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.193471 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.211863 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.211948 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.211987 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.212134 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.212185 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.212860 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.212986 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213030 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213126 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213174 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213218 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213265 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213312 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213352 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 
07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213386 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nptzx\" (UniqueName: \"kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.214022 4835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.214338 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.215258 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.217117 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.218880 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.219086 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx" (OuterVolumeSpecName: "kube-api-access-nptzx") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "kube-api-access-nptzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.219886 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.220555 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.222026 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.222780 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.223024 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.223182 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.223535 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.224061 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315438 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315512 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nptzx\" (UniqueName: \"kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315541 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315567 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315595 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315619 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315740 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315766 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315791 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315816 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315839 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315863 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315892 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.580220 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" path="/var/lib/kubelet/pods/9b287031-510c-410c-ade6-c2cf7a48e363/volumes" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.863605 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" event={"ID":"62724c3f-5c92-4e77-ba3a-0f6b7215f48a","Type":"ContainerDied","Data":"b228e669bd5b200a2abbd929c9ec6fc4843ea07663488a746bc7f94dc855f949"} Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.863682 4835 scope.go:117] "RemoveContainer" containerID="3ce1b71be758dd076de182606cb238305ec470a936ab71da41c867e65c4d55e4" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.863725 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.893295 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.901139 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:25:45 crc kubenswrapper[4835]: I0201 07:25:45.577307 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" path="/var/lib/kubelet/pods/62724c3f-5c92-4e77-ba3a-0f6b7215f48a/volumes" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.011921 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-m8scc"] Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012362 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012376 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012389 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012397 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012424 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012433 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012446 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" 
containerName="oauth-openshift" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012453 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" containerName="oauth-openshift" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012462 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012469 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012482 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012491 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012504 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012511 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012522 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012530 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012541 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012548 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012560 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012567 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012577 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012584 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012595 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012602 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012612 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012620 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012724 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012739 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012752 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012762 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" containerName="oauth-openshift" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012773 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.013158 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.014758 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.017383 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.017968 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018077 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018198 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018465 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018516 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018560 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018584 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018589 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018740 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018773 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.029961 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.030953 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.032588 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-m8scc"] Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.035820 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099350 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099423 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099466 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099500 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-policies\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099529 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-dir\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099623 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099669 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brzb\" (UniqueName: \"kubernetes.io/projected/ed3a6b84-13c0-4752-8860-7c21ade20300-kube-api-access-4brzb\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099735 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099768 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099811 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099847 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099874 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099934 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099992 
4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.201344 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.201682 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.201812 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.201924 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202024 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202141 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202262 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 
07:25:50.202364 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202510 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-policies\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202609 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202710 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-dir\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202920 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.203017 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brzb\" (UniqueName: \"kubernetes.io/projected/ed3a6b84-13c0-4752-8860-7c21ade20300-kube-api-access-4brzb\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.203120 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.203917 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.203937 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-policies\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.203031 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-dir\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.205132 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.205269 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.207894 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.208487 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.210188 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.210647 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.210660 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.211042 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.211229 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.229853 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.232582 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4brzb\" (UniqueName: \"kubernetes.io/projected/ed3a6b84-13c0-4752-8860-7c21ade20300-kube-api-access-4brzb\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.330398 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.767250 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-m8scc"] Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.907487 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" event={"ID":"ed3a6b84-13c0-4752-8860-7c21ade20300","Type":"ContainerStarted","Data":"420e8a96bdff4bd6618da9f051787277099fe11bdc768c8b035a266c0d084484"} Feb 01 07:25:51 crc kubenswrapper[4835]: I0201 07:25:51.914018 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" event={"ID":"ed3a6b84-13c0-4752-8860-7c21ade20300","Type":"ContainerStarted","Data":"ed77361171f748228589a117c7bf43f180d816bc8a8e9aa1299d4ca4762f6ca9"} Feb 01 07:25:51 crc kubenswrapper[4835]: I0201 07:25:51.914429 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:51 crc kubenswrapper[4835]: I0201 07:25:51.921355 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:51 crc kubenswrapper[4835]: I0201 07:25:51.942595 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" podStartSLOduration=34.942571979 podStartE2EDuration="34.942571979s" podCreationTimestamp="2026-02-01 07:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:25:51.939694686 +0000 UTC m=+225.060131160" watchObservedRunningTime="2026-02-01 07:25:51.942571979 +0000 UTC m=+225.063008453" Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.191699 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.192179 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.192277 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.193632 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.193814 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" 
podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5" gracePeriod=600 Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.946009 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5" exitCode=0 Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.946497 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5"} Feb 01 07:25:56 crc kubenswrapper[4835]: I0201 07:25:56.952969 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf"} Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.601735 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603577 4835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603669 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603683 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.603904 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603934 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2" gracePeriod=15 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604019 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4" gracePeriod=15 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603964 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604155 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604162 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604076 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54" gracePeriod=15 Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604174 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604181 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604191 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604197 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604205 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604211 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604221 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604227 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604236 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604241 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604222 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9" gracePeriod=15 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603981 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94" gracePeriod=15 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604601 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604614 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604625 
4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604631 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604638 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604646 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.616644 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681465 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681537 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681578 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681612 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681633 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681649 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 
crc kubenswrapper[4835]: I0201 07:26:01.681675 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681704 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782469 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782813 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782856 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782881 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782900 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782931 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782971 
4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782992 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.783005 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782948 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782937 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782683 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.783036 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.783102 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.783204 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.986600 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.988185 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.989072 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9" exitCode=0 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.989114 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4" exitCode=0 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.989132 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94" exitCode=0 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.989146 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54" exitCode=2 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.989251 4835 scope.go:117] "RemoveContainer" containerID="39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.991112 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" containerID="3a62854f07efe9ee61bbc8b6cf4f08d0ff0e9a200d15a47492c6bdf618532148" exitCode=0 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.991148 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9b454b8-f758-43d4-bd2b-93ebc807e06e","Type":"ContainerDied","Data":"3a62854f07efe9ee61bbc8b6cf4f08d0ff0e9a200d15a47492c6bdf618532148"} Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.992000 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.002465 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.348810 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.350032 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.406461 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access\") pod \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") "
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.406656 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") pod \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") "
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.406752 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock\") pod \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") "
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.406784 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c9b454b8-f758-43d4-bd2b-93ebc807e06e" (UID: "c9b454b8-f758-43d4-bd2b-93ebc807e06e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.406907 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock" (OuterVolumeSpecName: "var-lock") pod "c9b454b8-f758-43d4-bd2b-93ebc807e06e" (UID: "c9b454b8-f758-43d4-bd2b-93ebc807e06e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.407163 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.407198 4835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.412503 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c9b454b8-f758-43d4-bd2b-93ebc807e06e" (UID: "c9b454b8-f758-43d4-bd2b-93ebc807e06e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.508075 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.967551 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.969486 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.970334 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.970694 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.011307 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.013339 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2" exitCode=0
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.013432 4835 scope.go:117] "RemoveContainer" containerID="7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.013750 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.016660 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9b454b8-f758-43d4-bd2b-93ebc807e06e","Type":"ContainerDied","Data":"2997edb8ab02ca7e2da0f4120bdf3140a5e44974f1c0d9270cf560bcceec34c4"}
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.016710 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.016712 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2997edb8ab02ca7e2da0f4120bdf3140a5e44974f1c0d9270cf560bcceec34c4"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.024682 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.025338 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.032464 4835 scope.go:117] "RemoveContainer" containerID="0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.057433 4835 scope.go:117] "RemoveContainer" containerID="02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.076217 4835 scope.go:117] "RemoveContainer" containerID="6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.090153 4835 scope.go:117] "RemoveContainer" containerID="7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.107846 4835 scope.go:117] "RemoveContainer" containerID="fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119568 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119602 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119622 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119783 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119818 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119879 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.127882 4835 scope.go:117] "RemoveContainer" containerID="7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9"
Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.128399 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\": container with ID starting with 7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9 not found: ID does not exist" containerID="7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.128458 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9"} err="failed to get container status \"7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\": rpc error: code = NotFound desc = could not find container \"7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\": container with ID starting with 7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9 not found: ID does not exist"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.128490 4835 scope.go:117] "RemoveContainer" containerID="0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4"
Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.128953 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\": container with ID starting with 0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4 not found: ID does not exist" containerID="0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129021 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4"} err="failed to get container status \"0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\": rpc error: code = NotFound desc = could not find container \"0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\": container with ID starting with 0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4 not found: ID does not exist"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129066 4835 scope.go:117] "RemoveContainer" containerID="02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94"
Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.129475 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\": container with ID starting with 02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94 not found: ID does not exist" containerID="02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129537 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94"} err="failed to get container status \"02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\": rpc error: code = NotFound desc = could not find container \"02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\": container with ID starting with 02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94 not found: ID does not exist"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129560 4835 scope.go:117] "RemoveContainer" containerID="6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54"
Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.129898 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\": container with ID starting with 6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54 not found: ID does not exist" containerID="6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129929 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54"} err="failed to get container status \"6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\": rpc error: code = NotFound desc = could not find container \"6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\": container with ID starting with 6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54 not found: ID does not exist"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129950 4835 scope.go:117] "RemoveContainer" containerID="7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2"
Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.131582 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\": container with ID starting with 7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2 not found: ID does not exist" containerID="7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.131626 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2"} err="failed to get container status \"7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\": rpc error: code = NotFound desc = could not find container \"7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\": container with ID starting with 7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2 not found: ID does not exist"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.131653 4835 scope.go:117] "RemoveContainer" containerID="fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17"
Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.131945 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\": container with ID starting with fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17 not found: ID does not exist" containerID="fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.131979 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17"} err="failed to get container status \"fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\": rpc error: code = NotFound desc = could not find container \"fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\": container with ID starting with fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17 not found: ID does not exist"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.221513 4835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.221562 4835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.221582 4835 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.345803 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.347383 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:05 crc kubenswrapper[4835]: I0201 07:26:05.576456 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Feb 01 07:26:06 crc kubenswrapper[4835]: E0201 07:26:06.630258 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 01 07:26:06 crc kubenswrapper[4835]: I0201 07:26:06.630839 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 01 07:26:06 crc kubenswrapper[4835]: W0201 07:26:06.664134 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e3f00328f377ed3182987ee1fc3cb2b673847b03106247434b8862766fc0c12e WatchSource:0}: Error finding container e3f00328f377ed3182987ee1fc3cb2b673847b03106247434b8862766fc0c12e: Status 404 returned error can't find the container with id e3f00328f377ed3182987ee1fc3cb2b673847b03106247434b8862766fc0c12e
Feb 01 07:26:06 crc kubenswrapper[4835]: E0201 07:26:06.668900 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18900ea7a0495361 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-01 07:26:06.668092257 +0000 UTC m=+239.788528701,LastTimestamp:2026-02-01 07:26:06.668092257 +0000 UTC m=+239.788528701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 01 07:26:07 crc kubenswrapper[4835]: I0201 07:26:07.043164 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e3f00328f377ed3182987ee1fc3cb2b673847b03106247434b8862766fc0c12e"}
Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.275963 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.276734 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.277252 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.277643 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.277981 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:07 crc kubenswrapper[4835]: I0201 07:26:07.278019 4835 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.278289 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms"
Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.479635 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms"
Feb 01 07:26:07 crc kubenswrapper[4835]: I0201 07:26:07.571202 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.880390 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms"
Feb 01 07:26:08 crc kubenswrapper[4835]: I0201 07:26:08.049707 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17"}
Feb 01 07:26:08 crc kubenswrapper[4835]: E0201 07:26:08.050367 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 01 07:26:08 crc kubenswrapper[4835]: I0201 07:26:08.050450 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:08 crc kubenswrapper[4835]: E0201 07:26:08.682662 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s"
Feb 01 07:26:09 crc kubenswrapper[4835]: E0201 07:26:09.056048 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 01 07:26:10 crc kubenswrapper[4835]: E0201 07:26:10.284315 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="3.2s"
Feb 01 07:26:13 crc kubenswrapper[4835]: E0201 07:26:13.485296 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="6.4s"
Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.115317 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.115901 4835 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976" exitCode=1
Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.115973 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976"}
Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.116850 4835 scope.go:117] "RemoveContainer" containerID="611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976"
Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.117312 4835 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.119207 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:15 crc kubenswrapper[4835]: E0201 07:26:15.355964 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18900ea7a0495361 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-01 07:26:06.668092257 +0000 UTC m=+239.788528701,LastTimestamp:2026-02-01 07:26:06.668092257 +0000 UTC m=+239.788528701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.129964 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.130029 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5364c9f6974aec68b47b5c8588927fc1bdaf21f74470ab2cc89a5b9b958550d1"}
Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.131272 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.135902 4835 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.566160 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.567235 4835 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.567806 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.595567 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.595612 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:16 crc kubenswrapper[4835]: E0201 07:26:16.596114 4835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.596796 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:16 crc kubenswrapper[4835]: W0201 07:26:16.630487 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-424b95721b8ff83eb1d7a89e372612a506e3ed06544a4575ea99d675e8375e10 WatchSource:0}: Error finding container 424b95721b8ff83eb1d7a89e372612a506e3ed06544a4575ea99d675e8375e10: Status 404 returned error can't find the container with id 424b95721b8ff83eb1d7a89e372612a506e3ed06544a4575ea99d675e8375e10
Feb 01 07:26:17 crc kubenswrapper[4835]: I0201 07:26:17.140723 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"424b95721b8ff83eb1d7a89e372612a506e3ed06544a4575ea99d675e8375e10"}
Feb 01 07:26:17 crc kubenswrapper[4835]: I0201 07:26:17.576679 4835 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:17 crc kubenswrapper[4835]: I0201 07:26:17.577564 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:17 crc kubenswrapper[4835]: I0201 07:26:17.578225 4835 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:17 crc kubenswrapper[4835]: E0201 07:26:17.626937 4835 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" volumeName="registry-storage"
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.149500 4835 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8f5ee58ffbe76f6e65ebe195b8d37780bccd06ec6bb269f7ccb020979e4a5319" exitCode=0
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.149562 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8f5ee58ffbe76f6e65ebe195b8d37780bccd06ec6bb269f7ccb020979e4a5319"}
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.149973 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.150005 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:18 crc kubenswrapper[4835]: E0201 07:26:18.150665 4835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.150705 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.152926 4835 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.153246 4835 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:19 crc kubenswrapper[4835]: I0201 07:26:19.157776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"41018aa6d57af6044d17ccc0cd9b1534c7b683e85dd94c4545cf962c6e45ce32"}
Feb 01 07:26:19 crc kubenswrapper[4835]: I0201 07:26:19.157828 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3fbe5aa7ad6849137c02b3975e4460c2e6b49cf85f8c2c9aaf5fbaddc98d6847"}
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166190 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"531fdfc73b147d204745e7871dce2d6953f2d35502e7aab35e3c8a76e339f3b0"}
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166441 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166451 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"56450161e526e8ef8619f7267049ecf198eb6f7b0e0ba0ae4630b39bf76fc521"}
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166462 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b5e77a08d11bf0fafb64750d1d51714e22d6d88f0f2d58561f93feddfed02d5"}
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166496 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166513 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.653471 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 01 07:26:21 crc kubenswrapper[4835]: I0201 07:26:21.597954 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:21 crc kubenswrapper[4835]: I0201 07:26:21.598665 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:21 crc kubenswrapper[4835]: I0201 07:26:21.612009 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:23 crc kubenswrapper[4835]: I0201 07:26:23.376748 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 01 07:26:23 crc kubenswrapper[4835]: I0201 07:26:23.386074 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 01 07:26:25 crc kubenswrapper[4835]: I0201 07:26:25.200910 4835 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:25 crc kubenswrapper[4835]: I0201 07:26:25.240185 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:26:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:26:18Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:26:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:26:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fbe5aa7ad6849137c02b3975e4460c2e6b49cf85f8c2c9aaf5fbaddc98d6847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5e77a08d11bf0fafb64750d1d51714e22d6d88f0f2d58561f93feddfed02d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:26:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41018aa6d57af6044d17ccc0cd9b1534c7b683e85dd94c4545cf962c6e45ce32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531fdfc73b147d204745e7871dce2d6953f2d35502e7aab35e3c8a76e339f3b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:26:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56450161e526e8ef8619f7267049ecf198eb6f7b0e0ba0ae4630b39bf76fc521\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:26:19Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f5ee58ffbe76f6e65ebe195b8d37780bccd06ec6bb269f7ccb020979e4a5319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5ee58ffbe76f6e65ebe195b8d37780bccd06ec6bb269f7ccb020979e4a5319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:26:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:26:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"87ff5368-06f9-4f47-b5bb-e5916283dec7\": field is immutable"
Feb 01 07:26:25 crc kubenswrapper[4835]: I0201 07:26:25.286009 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="695946f9-9c64-47c4-aada-771c48dbcef9"
Feb 01 07:26:26 crc kubenswrapper[4835]: I0201 07:26:26.213203 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:26 crc kubenswrapper[4835]: I0201 07:26:26.213525 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:26 crc kubenswrapper[4835]: I0201 07:26:26.219481 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="695946f9-9c64-47c4-aada-771c48dbcef9"
Feb 01 07:26:26 crc kubenswrapper[4835]: I0201 07:26:26.219757 4835 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://3fbe5aa7ad6849137c02b3975e4460c2e6b49cf85f8c2c9aaf5fbaddc98d6847"
Feb 01 07:26:26 crc kubenswrapper[4835]: I0201 07:26:26.219789 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:27 crc kubenswrapper[4835]: I0201 07:26:27.218974 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:27 crc kubenswrapper[4835]: I0201 07:26:27.219015 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7" Feb 01 07:26:27 crc kubenswrapper[4835]: I0201 07:26:27.223245 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="695946f9-9c64-47c4-aada-771c48dbcef9" Feb 01 07:26:30 crc kubenswrapper[4835]: I0201 07:26:30.658202 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:26:35 crc kubenswrapper[4835]: I0201 07:26:35.568986 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 01 07:26:35 crc kubenswrapper[4835]: I0201 07:26:35.734813 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 01 07:26:35 crc kubenswrapper[4835]: I0201 07:26:35.860578 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 01 07:26:35 crc kubenswrapper[4835]: I0201 07:26:35.897009 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 01 07:26:35 crc kubenswrapper[4835]: I0201 07:26:35.972318 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 01 07:26:36 crc kubenswrapper[4835]: I0201 07:26:36.075677 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 01 07:26:36 crc kubenswrapper[4835]: I0201 07:26:36.166058 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 01 07:26:36 crc kubenswrapper[4835]: I0201 07:26:36.976012 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.055504 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.143271 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.304504 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.412822 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.423944 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.447820 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.529751 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.582848 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.768353 
4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.819121 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.945484 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.103993 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.125595 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.261331 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.264347 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.267272 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.283643 4835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.291604 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.291711 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.300640 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.316348 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.316331998 podStartE2EDuration="13.316331998s" podCreationTimestamp="2026-02-01 07:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:26:38.312990788 +0000 UTC m=+271.433427262" watchObservedRunningTime="2026-02-01 07:26:38.316331998 +0000 UTC m=+271.436768432" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.402470 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.621957 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.677657 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.809878 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.869059 4835 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.062702 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.069929 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.084475 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.199639 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.211407 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.258941 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.263858 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.272712 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.273694 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.286368 4835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.288193 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.320280 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.335464 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.372359 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.419231 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.451524 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.464274 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.484315 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.559336 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.594637 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.601170 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.636512 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.702731 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.751793 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.753052 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.798342 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.803492 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.926747 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.933100 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.952937 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.965374 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.966838 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.979402 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.047917 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.137838 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.329563 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.331233 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.526041 4835 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.545989 4835 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.567820 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.756835 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.788502 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.791214 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.867559 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.907034 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.988698 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.051239 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.052949 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.055460 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.171922 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.175151 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.282151 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.302771 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.309690 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.360128 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.460091 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 01 07:26:41 crc 
kubenswrapper[4835]: I0201 07:26:41.628942 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.671367 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.714340 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.806008 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.839821 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.862140 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.878142 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.974726 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.985469 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.048844 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.091839 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.170531 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.183942 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.185600 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.200257 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.213929 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.331375 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.350525 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.480274 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.599348 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.616221 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.632279 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.641138 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.651749 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.684027 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.803022 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.870503 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.936687 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.951144 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.024831 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.032139 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.096684 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.262544 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.270029 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.289231 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.319085 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.359372 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.401342 
4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.453795 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.453954 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.470941 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.471062 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.497523 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.515372 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.668624 4835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.697630 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.786987 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.787332 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.808590 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.829596 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.917613 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.940100 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.013265 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.041572 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.171125 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.182460 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.187288 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.215488 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.232880 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.261498 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.366485 4835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.388680 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.392221 4835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.451862 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.487719 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.492842 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.598387 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.618468 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.705607 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.819173 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.823426 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.827899 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.847584 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.002494 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.027936 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.094543 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.111439 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.177031 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.188296 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.302070 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.302743 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.423904 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.439175 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.503393 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.515207 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.626057 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.963962 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.034966 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.067219 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.074587 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.091683 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.094092 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.139744 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.227215 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" 
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.293080 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.303531 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.327922 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.396129 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.433599 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.482988 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.522426 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.600966 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.678559 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.722908 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.725887 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.769929 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.811218 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.847425 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.870145 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.945392 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.945434 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.124734 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.130139 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.212460 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.319460 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.342660 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.434024 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.445617 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.485685 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.636906 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.671044 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.752857 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.769240 4835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.769539 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17" gracePeriod=5
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.797577 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.824054 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.879093 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 01 07:26:48 crc kubenswrapper[4835]: I0201 07:26:48.200684 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 01 07:26:48 crc kubenswrapper[4835]: I0201 07:26:48.231220 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 01 07:26:48 crc kubenswrapper[4835]: I0201 07:26:48.594316 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 01 07:26:48 crc kubenswrapper[4835]: I0201 07:26:48.822177 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 01 07:26:48 crc kubenswrapper[4835]: I0201 07:26:48.894709 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.026998 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.043071 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.056283 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.133097 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.133797 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.147804 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.232633 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.442617 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.470322 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.494049 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.508745 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.738356 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.740574 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.969745 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.378684 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.424836 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.555170 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.641650 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.703861 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.895787 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 01 07:26:51 crc kubenswrapper[4835]: I0201 07:26:51.022319 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 01 07:26:51 crc kubenswrapper[4835]: I0201 07:26:51.048451 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 01 07:26:51 crc kubenswrapper[4835]: I0201 07:26:51.317965 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 01 07:26:51 crc kubenswrapper[4835]: I0201 07:26:51.816696 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.772880 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"]
Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.776136 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zbfbl" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="registry-server" containerID="cri-o://0eea26ae4bb5a1954f72fbcc75d1e7903480a69577a36065cf6a4254e3efba68" gracePeriod=30
Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.788517 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t677t"]
Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.789493 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t677t" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="registry-server" containerID="cri-o://e8d75b9cdb3185ff37877ed85d6d3372730274f7dbff223d7ea5c84fe296a601" gracePeriod=30
Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.801776 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"]
Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.802244 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" containerID="cri-o://aec701259e552f23dfcf4e9cf051bfbdb52a72d9c0db034b350a2330451e632f" gracePeriod=30
Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.829703 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xx49"]
Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.830171 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4xx49" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="registry-server" containerID="cri-o://9eb022e2135b0596e33429e62d1e55cd8a0be16a9faa993cffd3947dfd050b0a" gracePeriod=30
Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.844291 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"]
Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.851164 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7hk7" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="registry-server" containerID="cri-o://9cd63e168f5ee1bba32762ea60b5535c14b22b6a31b98e3419ead8dd99d4331a" gracePeriod=30
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.369625 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.369734 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.407551 4835 generic.go:334] "Generic (PLEG): container finished" podID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerID="0eea26ae4bb5a1954f72fbcc75d1e7903480a69577a36065cf6a4254e3efba68" exitCode=0
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.407657 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerDied","Data":"0eea26ae4bb5a1954f72fbcc75d1e7903480a69577a36065cf6a4254e3efba68"}
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.411608 4835 generic.go:334] "Generic (PLEG): container finished" podID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerID="9eb022e2135b0596e33429e62d1e55cd8a0be16a9faa993cffd3947dfd050b0a" exitCode=0
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.411739 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerDied","Data":"9eb022e2135b0596e33429e62d1e55cd8a0be16a9faa993cffd3947dfd050b0a"}
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.415229 4835 generic.go:334] "Generic (PLEG): container finished" podID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerID="e8d75b9cdb3185ff37877ed85d6d3372730274f7dbff223d7ea5c84fe296a601" exitCode=0
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.415494 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerDied","Data":"e8d75b9cdb3185ff37877ed85d6d3372730274f7dbff223d7ea5c84fe296a601"}
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.419005 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.419094 4835 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17" exitCode=137
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.419257 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.419506 4835 scope.go:117] "RemoveContainer" containerID="4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.421721 4835 generic.go:334] "Generic (PLEG): container finished" podID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerID="aec701259e552f23dfcf4e9cf051bfbdb52a72d9c0db034b350a2330451e632f" exitCode=0
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.421816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" event={"ID":"8615180e-fc31-41b2-ad59-5ae2e48af5a2","Type":"ContainerDied","Data":"aec701259e552f23dfcf4e9cf051bfbdb52a72d9c0db034b350a2330451e632f"}
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.425545 4835 generic.go:334] "Generic (PLEG): container finished" podID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerID="9cd63e168f5ee1bba32762ea60b5535c14b22b6a31b98e3419ead8dd99d4331a" exitCode=0
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.425589 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerDied","Data":"9cd63e168f5ee1bba32762ea60b5535c14b22b6a31b98e3419ead8dd99d4331a"}
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.450012 4835 scope.go:117] "RemoveContainer" containerID="4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17"
Feb 01 07:26:53 crc kubenswrapper[4835]: E0201 07:26:53.450827 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17\": container with ID starting with 4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17 not found: ID does not exist" containerID="4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.450894 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17"} err="failed to get container status \"4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17\": rpc error: code = NotFound desc = could not find container \"4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17\": container with ID starting with 4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17 not found: ID does not exist"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.510496 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.510573 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.510688 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.510716 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.510745 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.511223 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.511297 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.511324 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.512328 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.519699 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.575382 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.611944 4835 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.611975 4835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.611985 4835 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.611993 4835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.612002 4835 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.724677 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.778779 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.791695 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.798420 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.849220 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7hk7"
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918111 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcvsr\" (UniqueName: \"kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr\") pod \"602186bd-e71a-4ce1-ad39-c56495e815c3\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918163 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh6bn\" (UniqueName: \"kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn\") pod \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918186 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhft7\" (UniqueName: \"kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7\") pod \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918210 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content\") pod \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918234 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content\") pod \"835b2622-9047-4e3a-b019-6f15c5fd4566\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918253 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities\") pod \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918269 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content\") pod \"602186bd-e71a-4ce1-ad39-c56495e815c3\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918300 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca\") pod \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918322 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities\") pod \"602186bd-e71a-4ce1-ad39-c56495e815c3\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918377 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities\") pod \"835b2622-9047-4e3a-b019-6f15c5fd4566\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918424 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics\") pod \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918451 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k72t5\" (UniqueName: \"kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5\") pod \"835b2622-9047-4e3a-b019-6f15c5fd4566\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") "
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.919776 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities" (OuterVolumeSpecName: "utilities") pod "835b2622-9047-4e3a-b019-6f15c5fd4566" (UID: "835b2622-9047-4e3a-b019-6f15c5fd4566"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.919930 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8615180e-fc31-41b2-ad59-5ae2e48af5a2" (UID: "8615180e-fc31-41b2-ad59-5ae2e48af5a2"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.920510 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities" (OuterVolumeSpecName: "utilities") pod "602186bd-e71a-4ce1-ad39-c56495e815c3" (UID: "602186bd-e71a-4ce1-ad39-c56495e815c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.921424 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities" (OuterVolumeSpecName: "utilities") pod "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" (UID: "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.923032 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5" (OuterVolumeSpecName: "kube-api-access-k72t5") pod "835b2622-9047-4e3a-b019-6f15c5fd4566" (UID: "835b2622-9047-4e3a-b019-6f15c5fd4566"). InnerVolumeSpecName "kube-api-access-k72t5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.923663 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8615180e-fc31-41b2-ad59-5ae2e48af5a2" (UID: "8615180e-fc31-41b2-ad59-5ae2e48af5a2"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.925497 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7" (OuterVolumeSpecName: "kube-api-access-jhft7") pod "8615180e-fc31-41b2-ad59-5ae2e48af5a2" (UID: "8615180e-fc31-41b2-ad59-5ae2e48af5a2"). InnerVolumeSpecName "kube-api-access-jhft7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.929961 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn" (OuterVolumeSpecName: "kube-api-access-wh6bn") pod "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" (UID: "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e"). InnerVolumeSpecName "kube-api-access-wh6bn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.936572 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr" (OuterVolumeSpecName: "kube-api-access-fcvsr") pod "602186bd-e71a-4ce1-ad39-c56495e815c3" (UID: "602186bd-e71a-4ce1-ad39-c56495e815c3"). InnerVolumeSpecName "kube-api-access-fcvsr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.952222 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "602186bd-e71a-4ce1-ad39-c56495e815c3" (UID: "602186bd-e71a-4ce1-ad39-c56495e815c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.981321 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "835b2622-9047-4e3a-b019-6f15c5fd4566" (UID: "835b2622-9047-4e3a-b019-6f15c5fd4566"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.992718 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" (UID: "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019476 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97wl9\" (UniqueName: \"kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9\") pod \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") "
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019597 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content\") pod \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") "
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019623 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities\") pod \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") "
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019835 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019852 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019864 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k72t5\" (UniqueName: \"kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019873 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcvsr\" (UniqueName: \"kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019883 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh6bn\" (UniqueName: \"kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019910 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhft7\" (UniqueName: \"kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019919 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019927 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019934 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019942 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019950 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019958 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.020708 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities" (OuterVolumeSpecName: "utilities") pod "2e2bb332-ae2b-4ef7-90b2-79928bf7407b" (UID: "2e2bb332-ae2b-4ef7-90b2-79928bf7407b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.024230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9" (OuterVolumeSpecName: "kube-api-access-97wl9") pod "2e2bb332-ae2b-4ef7-90b2-79928bf7407b" (UID: "2e2bb332-ae2b-4ef7-90b2-79928bf7407b"). InnerVolumeSpecName "kube-api-access-97wl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.121459 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.121810 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97wl9\" (UniqueName: \"kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.144250 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e2bb332-ae2b-4ef7-90b2-79928bf7407b" (UID: "2e2bb332-ae2b-4ef7-90b2-79928bf7407b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.223533 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.435701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerDied","Data":"34d744c0f2118911ec3770b8a37e279293e3d0075191d345f7ef2f24b56383a6"}
Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.435737 4835 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.435777 4835 scope.go:117] "RemoveContainer" containerID="0eea26ae4bb5a1954f72fbcc75d1e7903480a69577a36065cf6a4254e3efba68" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.440536 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.440538 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerDied","Data":"dea430e052099dd47c2c324f9a18af947b95755e422272ec8bbff41882bef5e5"} Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.444668 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerDied","Data":"8633807aa4c1b4534aedf9236769294f25ed6ac597e2c0fda34cf924f7b62039"} Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.444701 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t677t" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.448649 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.448687 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" event={"ID":"8615180e-fc31-41b2-ad59-5ae2e48af5a2","Type":"ContainerDied","Data":"756ac183cdf318bae9818cbd3f3e4f67346c6974661fa7194394a92f9755088e"} Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.452696 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerDied","Data":"46b5cafa1f07b5021e9e78fc5e6be54cf12c37d6cc9f28c581409330362b0959"} Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.452888 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.476699 4835 scope.go:117] "RemoveContainer" containerID="7fde970c7809bb8c50b149f97b8907cd34e5ed3f92e53b3f48046bec959d09ef" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.498614 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.503146 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.523969 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t677t"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.527171 4835 scope.go:117] "RemoveContainer" containerID="eac60a2bcfc7a27f8cce064694d441e59039265b959d26823af533d85c7dcf10" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.533890 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t677t"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.538290 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xx49"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.546236 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xx49"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.552020 4835 scope.go:117] "RemoveContainer" containerID="9eb022e2135b0596e33429e62d1e55cd8a0be16a9faa993cffd3947dfd050b0a" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.554883 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.559399 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.563085 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.566318 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.575112 4835 scope.go:117] "RemoveContainer" containerID="b1f0e4a7c799308902bb8e0217a0c30fdd02e1a32fd2564302d2a528cea8ba75" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.592897 4835 scope.go:117] "RemoveContainer" containerID="b14cf051de6ab1294efac8b8b8e42b820cf594040b129fc04b183d93a8efbf57" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.607178 4835 scope.go:117] "RemoveContainer" containerID="e8d75b9cdb3185ff37877ed85d6d3372730274f7dbff223d7ea5c84fe296a601" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.632277 4835 scope.go:117] "RemoveContainer" containerID="1b7f8d984d304fa16176f9ff67b5f5c30b1244ad6e8dd4e1ef20f9098a0f7fe2" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.653208 4835 scope.go:117] "RemoveContainer" containerID="7270b81f0145b4123ee2f475f3f90b8aa11e59eef5e948db9ab2c46452e1838a" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.671854 4835 scope.go:117] "RemoveContainer" containerID="aec701259e552f23dfcf4e9cf051bfbdb52a72d9c0db034b350a2330451e632f" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.689648 4835 scope.go:117] "RemoveContainer" 
containerID="9cd63e168f5ee1bba32762ea60b5535c14b22b6a31b98e3419ead8dd99d4331a" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.704709 4835 scope.go:117] "RemoveContainer" containerID="deccbf5bf47273db8305d287368e84a9555304937b617c52aaad45a3c56162a2" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.726834 4835 scope.go:117] "RemoveContainer" containerID="d5974ea84742510757e055f310d0049c446f1e2fe023968cfe1b5034d72af99c" Feb 01 07:26:55 crc kubenswrapper[4835]: I0201 07:26:55.578031 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" path="/var/lib/kubelet/pods/2e2bb332-ae2b-4ef7-90b2-79928bf7407b/volumes" Feb 01 07:26:55 crc kubenswrapper[4835]: I0201 07:26:55.580059 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" path="/var/lib/kubelet/pods/602186bd-e71a-4ce1-ad39-c56495e815c3/volumes" Feb 01 07:26:55 crc kubenswrapper[4835]: I0201 07:26:55.581915 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" path="/var/lib/kubelet/pods/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e/volumes" Feb 01 07:26:55 crc kubenswrapper[4835]: I0201 07:26:55.584688 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" path="/var/lib/kubelet/pods/835b2622-9047-4e3a-b019-6f15c5fd4566/volumes" Feb 01 07:26:55 crc kubenswrapper[4835]: I0201 07:26:55.586355 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" path="/var/lib/kubelet/pods/8615180e-fc31-41b2-ad59-5ae2e48af5a2/volumes" Feb 01 07:27:05 crc kubenswrapper[4835]: I0201 07:27:05.325492 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 01 07:27:06 crc kubenswrapper[4835]: I0201 07:27:06.620542 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 01 07:27:07 crc kubenswrapper[4835]: I0201 07:27:07.334924 4835 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 01 07:27:10 crc kubenswrapper[4835]: I0201 07:27:10.817513 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 01 07:27:13 crc kubenswrapper[4835]: I0201 07:27:13.626029 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.551644 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.551849 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerName="controller-manager" containerID="cri-o://46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624" gracePeriod=30 Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.671431 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.671721 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" podUID="46f4b60b-0076-4087-b541-4617c3752687" containerName="route-controller-manager" containerID="cri-o://d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c" gracePeriod=30 Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.887725 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.908226 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles\") pod \"79f19c84-0217-4b08-8b4d-663096ce67b4\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.908282 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca\") pod \"79f19c84-0217-4b08-8b4d-663096ce67b4\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.908327 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config\") pod \"79f19c84-0217-4b08-8b4d-663096ce67b4\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.908355 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttz52\" (UniqueName: \"kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52\") pod \"79f19c84-0217-4b08-8b4d-663096ce67b4\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.908431 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert\") pod \"79f19c84-0217-4b08-8b4d-663096ce67b4\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.909346 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "79f19c84-0217-4b08-8b4d-663096ce67b4" (UID: "79f19c84-0217-4b08-8b4d-663096ce67b4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.909713 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config" (OuterVolumeSpecName: "config") pod "79f19c84-0217-4b08-8b4d-663096ce67b4" (UID: "79f19c84-0217-4b08-8b4d-663096ce67b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.910080 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "79f19c84-0217-4b08-8b4d-663096ce67b4" (UID: "79f19c84-0217-4b08-8b4d-663096ce67b4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.917189 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52" (OuterVolumeSpecName: "kube-api-access-ttz52") pod "79f19c84-0217-4b08-8b4d-663096ce67b4" (UID: "79f19c84-0217-4b08-8b4d-663096ce67b4"). InnerVolumeSpecName "kube-api-access-ttz52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.919304 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "79f19c84-0217-4b08-8b4d-663096ce67b4" (UID: "79f19c84-0217-4b08-8b4d-663096ce67b4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.965881 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009356 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca\") pod \"46f4b60b-0076-4087-b541-4617c3752687\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009449 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert\") pod \"46f4b60b-0076-4087-b541-4617c3752687\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009485 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qckj9\" (UniqueName: \"kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9\") pod \"46f4b60b-0076-4087-b541-4617c3752687\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009541 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config\") pod \"46f4b60b-0076-4087-b541-4617c3752687\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009743 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009758 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009770 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttz52\" (UniqueName: \"kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009781 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009793 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.010687 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config" (OuterVolumeSpecName: "config") pod "46f4b60b-0076-4087-b541-4617c3752687" (UID: "46f4b60b-0076-4087-b541-4617c3752687"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.011343 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca" (OuterVolumeSpecName: "client-ca") pod "46f4b60b-0076-4087-b541-4617c3752687" (UID: "46f4b60b-0076-4087-b541-4617c3752687"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.014759 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "46f4b60b-0076-4087-b541-4617c3752687" (UID: "46f4b60b-0076-4087-b541-4617c3752687"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.015352 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9" (OuterVolumeSpecName: "kube-api-access-qckj9") pod "46f4b60b-0076-4087-b541-4617c3752687" (UID: "46f4b60b-0076-4087-b541-4617c3752687"). InnerVolumeSpecName "kube-api-access-qckj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.111275 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.111310 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.111320 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qckj9\" (UniqueName: \"kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.111330 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.568803 4835 generic.go:334] "Generic (PLEG): container finished" podID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerID="46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624" exitCode=0 Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.568873 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" event={"ID":"79f19c84-0217-4b08-8b4d-663096ce67b4","Type":"ContainerDied","Data":"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624"} Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.568894 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.568915 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" event={"ID":"79f19c84-0217-4b08-8b4d-663096ce67b4","Type":"ContainerDied","Data":"88a43a32aeb11a7266228e44e96343168e4ad3f4bf296e26425609793a59a308"} Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.568936 4835 scope.go:117] "RemoveContainer" containerID="46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.572740 4835 generic.go:334] "Generic (PLEG): container finished" podID="46f4b60b-0076-4087-b541-4617c3752687" containerID="d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c" exitCode=0 Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.572794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" event={"ID":"46f4b60b-0076-4087-b541-4617c3752687","Type":"ContainerDied","Data":"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c"} Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.572857 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" event={"ID":"46f4b60b-0076-4087-b541-4617c3752687","Type":"ContainerDied","Data":"eaca48a7b94d929256f67ed77a297ce26bfbe10f609a2d3253d4e4ba2b33d879"} Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.572802 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.602369 4835 scope.go:117] "RemoveContainer" containerID="46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624" Feb 01 07:27:16 crc kubenswrapper[4835]: E0201 07:27:16.602882 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624\": container with ID starting with 46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624 not found: ID does not exist" containerID="46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.602925 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624"} err="failed to get container status \"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624\": rpc error: code = NotFound desc = could not find container \"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624\": container with ID starting with 46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624 not found: ID does not exist" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.602949 4835 scope.go:117] "RemoveContainer" containerID="d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.615322 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.621574 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.624461 4835 scope.go:117] "RemoveContainer" containerID="d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c" Feb 01 07:27:16 crc kubenswrapper[4835]: E0201 07:27:16.624941 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c\": container with ID starting with d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c not found: ID does not exist" containerID="d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.624976 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c"} err="failed to get container status \"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c\": rpc error: code = NotFound desc = could not find container \"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c\": container with ID starting with d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c not found: ID does not exist" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.626768 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.629824 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 
07:27:17 crc kubenswrapper[4835]: I0201 07:27:17.084928 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 01 07:27:17 crc kubenswrapper[4835]: I0201 07:27:17.590863 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f4b60b-0076-4087-b541-4617c3752687" path="/var/lib/kubelet/pods/46f4b60b-0076-4087-b541-4617c3752687/volumes" Feb 01 07:27:17 crc kubenswrapper[4835]: I0201 07:27:17.592045 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" path="/var/lib/kubelet/pods/79f19c84-0217-4b08-8b4d-663096ce67b4/volumes" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.042916 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-www9n"] Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043281 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043303 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043328 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="extract-utilities" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043340 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="extract-utilities" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043358 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="extract-utilities" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043370 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="extract-utilities" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043385 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043397 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043482 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f4b60b-0076-4087-b541-4617c3752687" containerName="route-controller-manager" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043496 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f4b60b-0076-4087-b541-4617c3752687" containerName="route-controller-manager" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043515 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="extract-content" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043527 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="extract-content" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043544 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerName="controller-manager" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043557 4835 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerName="controller-manager" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043573 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="extract-content" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043585 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="extract-content" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043603 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="extract-content" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043614 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="extract-content" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043627 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043639 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043654 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043667 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043682 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="extract-content" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043695 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="extract-content" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043713 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043725 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043742 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043754 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043772 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="extract-utilities" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043784 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="extract-utilities" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043797 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" containerName="installer" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043813 
4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" containerName="installer" Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043828 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="extract-utilities" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043841 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="extract-utilities" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043993 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f4b60b-0076-4087-b541-4617c3752687" containerName="route-controller-manager" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044015 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044033 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044054 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044072 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044090 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" containerName="installer" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044102 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="registry-server" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044117 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerName="controller-manager" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044131 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044852 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.047462 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.049149 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"] Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.049610 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.050128 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.051018 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.053857 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.053949 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.053861 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.055777 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6dcg\" (UniqueName: \"kubernetes.io/projected/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-kube-api-access-s6dcg\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.055858 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.055933 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2p6\" (UniqueName: \"kubernetes.io/projected/c2481990-b703-4792-b5b0-549daf22e66a-kube-api-access-gq2p6\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.056024 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-client-ca\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.056083 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.056143 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-config\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.056227 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-serving-cert\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.057063 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.057400 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.057405 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.057757 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.088248 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.157820 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6dcg\" (UniqueName: \"kubernetes.io/projected/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-kube-api-access-s6dcg\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.157883 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.157915 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2p6\" (UniqueName: \"kubernetes.io/projected/c2481990-b703-4792-b5b0-549daf22e66a-kube-api-access-gq2p6\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.157965 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.157999 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-client-ca\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: 
\"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.158035 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-config\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.158087 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-serving-cert\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.159547 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.159969 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-client-ca\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.161739 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-config\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.163841 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-serving-cert\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.163920 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.181271 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2p6\" (UniqueName: \"kubernetes.io/projected/c2481990-b703-4792-b5b0-549daf22e66a-kube-api-access-gq2p6\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc 
kubenswrapper[4835]: I0201 07:27:19.199752 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6dcg\" (UniqueName: \"kubernetes.io/projected/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-kube-api-access-s6dcg\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.374005 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.385624 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.488576 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.072372 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"] Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.073747 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.076392 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.078040 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.078321 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.078485 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.078495 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.078573 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.098345 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.171886 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.171951 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " 
pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.171985 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.172226 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t58dp\" (UniqueName: \"kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.172314 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.273208 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.273351 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.273395 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.273451 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.273485 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t58dp\" (UniqueName: \"kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.275305 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.275588 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.275815 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.291533 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.292693 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t58dp\" (UniqueName: \"kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.395944 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.536337 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.700281 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wqgsq"] Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.702234 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.704708 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.781174 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-catalog-content\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.781235 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-utilities\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.781295 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqnvm\" (UniqueName: \"kubernetes.io/projected/5cb5bbc9-0e87-45ed-897f-6e343be075d5-kube-api-access-nqnvm\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.882780 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-catalog-content\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.882860 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-utilities\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.882956 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqnvm\" (UniqueName: \"kubernetes.io/projected/5cb5bbc9-0e87-45ed-897f-6e343be075d5-kube-api-access-nqnvm\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.883803 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-catalog-content\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.883959 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-utilities\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.914359 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqnvm\" (UniqueName: \"kubernetes.io/projected/5cb5bbc9-0e87-45ed-897f-6e343be075d5-kube-api-access-nqnvm\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.031942 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.088360 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5blqv"] Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.090064 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.098533 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.186311 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.186446 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62w42\" (UniqueName: \"kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.186498 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.287383 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.287822 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.287982 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62w42\" (UniqueName: \"kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc 
kubenswrapper[4835]: I0201 07:27:21.288248 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.288811 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.314059 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62w42\" (UniqueName: \"kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.414164 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.747785 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"] Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.762761 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqgsq"] Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.778011 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"] Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.785183 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5blqv"] Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.791476 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-www9n"] Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.829019 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.835649 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.094754 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"] Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.113094 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-www9n"] Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.359076 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqgsq"] Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.362428 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"] Feb 01 07:27:22 crc kubenswrapper[4835]: W0201 07:27:22.365836 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cb5bbc9_0e87_45ed_897f_6e343be075d5.slice/crio-67b09ed4b7203d63d56d1fda7350e7801f601d8f630976f5ef65ea2f803d3d7b WatchSource:0}: Error finding container 67b09ed4b7203d63d56d1fda7350e7801f601d8f630976f5ef65ea2f803d3d7b: Status 404 returned error can't find the container with id 67b09ed4b7203d63d56d1fda7350e7801f601d8f630976f5ef65ea2f803d3d7b Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.365838 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5blqv"] Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.614322 4835 generic.go:334] "Generic (PLEG): container finished" podID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerID="d5e2f5d1534650a4cf1433bf132faf98e02e52decf048ace44fbb7b0f61e32fe" exitCode=0 Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.614624 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerDied","Data":"d5e2f5d1534650a4cf1433bf132faf98e02e52decf048ace44fbb7b0f61e32fe"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.614649 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerStarted","Data":"a97613cab5446cbb6022f66ef99ec2081a9134140b311ba32389e80a2e221cbc"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.618050 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" event={"ID":"10fad4bf-1fb3-4455-a349-fefb7f585c30","Type":"ContainerStarted","Data":"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.618103 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" event={"ID":"10fad4bf-1fb3-4455-a349-fefb7f585c30","Type":"ContainerStarted","Data":"6f7177eac40a95ebcf05587a253beb2d53eb30227ac5417fca9d9b44c2b17f2d"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.618501 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.619765 4835 patch_prober.go:28] interesting pod/controller-manager-84fd975466-sxqz2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.619803 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.619976 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" event={"ID":"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace","Type":"ContainerStarted","Data":"0bfcdb682af11bf82610dd655dbb0dbb6c99ca021d689f94b4283cc1ec45a205"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.619998 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" event={"ID":"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace","Type":"ContainerStarted","Data":"214fe92092a097c47af795f796c7ad8ad3e488a2461c92c690c4b5b33c211332"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.620573 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.624372 4835 generic.go:334] "Generic (PLEG): container finished" podID="5cb5bbc9-0e87-45ed-897f-6e343be075d5" containerID="b848e8c631552b1f89d6940b9b5fb1525c09a3eff4768433cb972cfe507ad540" exitCode=0 Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.624434 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqgsq" event={"ID":"5cb5bbc9-0e87-45ed-897f-6e343be075d5","Type":"ContainerDied","Data":"b848e8c631552b1f89d6940b9b5fb1525c09a3eff4768433cb972cfe507ad540"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.624470 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqgsq" event={"ID":"5cb5bbc9-0e87-45ed-897f-6e343be075d5","Type":"ContainerStarted","Data":"67b09ed4b7203d63d56d1fda7350e7801f601d8f630976f5ef65ea2f803d3d7b"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.626214 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" event={"ID":"c2481990-b703-4792-b5b0-549daf22e66a","Type":"ContainerStarted","Data":"55d8a61bd08b899d611ab4873dc003c8f3aa530c72b7031f33331e3ae5509f09"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.626298 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" event={"ID":"c2481990-b703-4792-b5b0-549daf22e66a","Type":"ContainerStarted","Data":"fa4397b21d379cbe699a0409b625a1bc94cda8f719e36a18f2b4f429654336a9"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.627183 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.629563 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.651300 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" podStartSLOduration=7.651280775 podStartE2EDuration="7.651280775s" podCreationTimestamp="2026-02-01 07:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:27:22.648817759 +0000 UTC m=+315.769254193" watchObservedRunningTime="2026-02-01 07:27:22.651280775 +0000 UTC m=+315.771717209" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.669781 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" podStartSLOduration=30.669761172 podStartE2EDuration="30.669761172s" podCreationTimestamp="2026-02-01 07:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:27:22.666860224 
+0000 UTC m=+315.787296658" watchObservedRunningTime="2026-02-01 07:27:22.669761172 +0000 UTC m=+315.790197626" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.699511 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" podStartSLOduration=7.69947409 podStartE2EDuration="7.69947409s" podCreationTimestamp="2026-02-01 07:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:27:22.697068016 +0000 UTC m=+315.817504450" watchObservedRunningTime="2026-02-01 07:27:22.69947409 +0000 UTC m=+315.819910534" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.816319 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.301293 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ghmxq"] Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.302977 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.306086 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghmxq"] Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.306186 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.417268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-utilities\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.417332 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpmn\" (UniqueName: \"kubernetes.io/projected/0155c2ce-1bd0-424d-931f-132c22e7a42e-kube-api-access-dlpmn\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.417471 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-catalog-content\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.486093 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-75mhs"] Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.488194 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.493211 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75mhs"] Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.493850 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.518545 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlpmn\" (UniqueName: \"kubernetes.io/projected/0155c2ce-1bd0-424d-931f-132c22e7a42e-kube-api-access-dlpmn\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.518638 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-catalog-content\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.518658 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-utilities\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.519127 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-utilities\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.519629 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-catalog-content\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.545519 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlpmn\" (UniqueName: \"kubernetes.io/projected/0155c2ce-1bd0-424d-931f-132c22e7a42e-kube-api-access-dlpmn\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.606736 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.620047 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-catalog-content\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.620144 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-utilities\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.620192 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl6bf\" (UniqueName: \"kubernetes.io/projected/5fead728-7b7f-4ee9-b01e-455d536a88c5-kube-api-access-wl6bf\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.640602 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.640634 4835 generic.go:334] "Generic (PLEG): container finished" podID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerID="00639fbfdc8c05a878182afacfc54aac4d6d97d80b8d202f1d59fcc0b702129d" exitCode=0 Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.640951 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerDied","Data":"00639fbfdc8c05a878182afacfc54aac4d6d97d80b8d202f1d59fcc0b702129d"} Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.644753 4835 generic.go:334] "Generic (PLEG): container finished" podID="5cb5bbc9-0e87-45ed-897f-6e343be075d5" containerID="c0aa03511aa1ec11ff61e924a59a70fe4a8671768f13fe80bccd019c9f867dfe" exitCode=0 Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.645659 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqgsq" event={"ID":"5cb5bbc9-0e87-45ed-897f-6e343be075d5","Type":"ContainerDied","Data":"c0aa03511aa1ec11ff61e924a59a70fe4a8671768f13fe80bccd019c9f867dfe"} Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.658040 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.722467 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-utilities\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.722934 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-utilities\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.723067 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl6bf\" (UniqueName: \"kubernetes.io/projected/5fead728-7b7f-4ee9-b01e-455d536a88c5-kube-api-access-wl6bf\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.723234 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-catalog-content\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.723673 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-catalog-content\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.761381 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl6bf\" (UniqueName: \"kubernetes.io/projected/5fead728-7b7f-4ee9-b01e-455d536a88c5-kube-api-access-wl6bf\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.810028 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.915502 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghmxq"] Feb 01 07:27:23 crc kubenswrapper[4835]: W0201 07:27:23.923369 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0155c2ce_1bd0_424d_931f_132c22e7a42e.slice/crio-f845d2547c4126eb18b128ebeeafa39644f96775618df5688ad77e4e3f29c39d WatchSource:0}: Error finding container f845d2547c4126eb18b128ebeeafa39644f96775618df5688ad77e4e3f29c39d: Status 404 returned error can't find the container with id f845d2547c4126eb18b128ebeeafa39644f96775618df5688ad77e4e3f29c39d Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.027424 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75mhs"] Feb 01 07:27:24 crc kubenswrapper[4835]: W0201 07:27:24.036209 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fead728_7b7f_4ee9_b01e_455d536a88c5.slice/crio-62f4b19fdb4c0830e59e7999d04de98106aa9561055a94a7befd8ae86378d63b WatchSource:0}: Error finding container 62f4b19fdb4c0830e59e7999d04de98106aa9561055a94a7befd8ae86378d63b: Status 404 returned error can't find the container with id 62f4b19fdb4c0830e59e7999d04de98106aa9561055a94a7befd8ae86378d63b Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.656426 4835 generic.go:334] "Generic (PLEG): container finished" podID="0155c2ce-1bd0-424d-931f-132c22e7a42e" containerID="83f242ee4bd070b393af829ec7bc10d6cbec9cfb20d3c5696c271f8ab3b1cf03" exitCode=0 Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.656495 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghmxq" event={"ID":"0155c2ce-1bd0-424d-931f-132c22e7a42e","Type":"ContainerDied","Data":"83f242ee4bd070b393af829ec7bc10d6cbec9cfb20d3c5696c271f8ab3b1cf03"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.656524 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghmxq" event={"ID":"0155c2ce-1bd0-424d-931f-132c22e7a42e","Type":"ContainerStarted","Data":"f845d2547c4126eb18b128ebeeafa39644f96775618df5688ad77e4e3f29c39d"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 
07:27:24.659149 4835 generic.go:334] "Generic (PLEG): container finished" podID="5fead728-7b7f-4ee9-b01e-455d536a88c5" containerID="86d803fc4c63848e36bb959eaf3a1fce37d7cbdddaa9fbcb8d7849cca6cbdf42" exitCode=0 Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.659353 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75mhs" event={"ID":"5fead728-7b7f-4ee9-b01e-455d536a88c5","Type":"ContainerDied","Data":"86d803fc4c63848e36bb959eaf3a1fce37d7cbdddaa9fbcb8d7849cca6cbdf42"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.659382 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75mhs" event={"ID":"5fead728-7b7f-4ee9-b01e-455d536a88c5","Type":"ContainerStarted","Data":"62f4b19fdb4c0830e59e7999d04de98106aa9561055a94a7befd8ae86378d63b"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.664384 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerStarted","Data":"4b4accff2f1a20d0e288fd1c22d16a0996201d0dc3273c256de8cfeb83f7a5c2"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.667201 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqgsq" event={"ID":"5cb5bbc9-0e87-45ed-897f-6e343be075d5","Type":"ContainerStarted","Data":"c119a3eab261e08dc8aeb835a550bc135c657074ffe17856b559cc9a58f6f021"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.696713 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5blqv" podStartSLOduration=2.197844143 podStartE2EDuration="3.69669841s" podCreationTimestamp="2026-02-01 07:27:21 +0000 UTC" firstStartedPulling="2026-02-01 07:27:22.615664648 +0000 UTC m=+315.736101082" lastFinishedPulling="2026-02-01 07:27:24.114518915 +0000 UTC m=+317.234955349" observedRunningTime="2026-02-01 07:27:24.694247044 +0000 UTC m=+317.814683498" watchObservedRunningTime="2026-02-01 07:27:24.69669841 +0000 UTC m=+317.817134844" Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.759350 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wqgsq" podStartSLOduration=3.310643734 podStartE2EDuration="4.759332383s" podCreationTimestamp="2026-02-01 07:27:20 +0000 UTC" firstStartedPulling="2026-02-01 07:27:22.625711978 +0000 UTC m=+315.746148412" lastFinishedPulling="2026-02-01 07:27:24.074400627 +0000 UTC m=+317.194837061" observedRunningTime="2026-02-01 07:27:24.756710932 +0000 UTC m=+317.877147376" watchObservedRunningTime="2026-02-01 07:27:24.759332383 +0000 UTC m=+317.879768817" Feb 01 07:27:25 crc kubenswrapper[4835]: I0201 07:27:25.673590 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75mhs" event={"ID":"5fead728-7b7f-4ee9-b01e-455d536a88c5","Type":"ContainerStarted","Data":"d2483b22d57573b85ac190f9ef9e6d4d021206c16a4b9a259ca21ce3bc676263"} Feb 01 07:27:25 crc kubenswrapper[4835]: I0201 07:27:25.675210 4835 generic.go:334] "Generic (PLEG): container finished" podID="0155c2ce-1bd0-424d-931f-132c22e7a42e" containerID="a0f3a8b184b1495ee75f611ba885b2af17d82d78a688a542ffbc9c5ecdd9a195" exitCode=0 Feb 01 07:27:25 crc kubenswrapper[4835]: I0201 07:27:25.675258 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghmxq" 
event={"ID":"0155c2ce-1bd0-424d-931f-132c22e7a42e","Type":"ContainerDied","Data":"a0f3a8b184b1495ee75f611ba885b2af17d82d78a688a542ffbc9c5ecdd9a195"} Feb 01 07:27:26 crc kubenswrapper[4835]: I0201 07:27:26.687535 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghmxq" event={"ID":"0155c2ce-1bd0-424d-931f-132c22e7a42e","Type":"ContainerStarted","Data":"48a2b36849fc2c37227697cfeb9e8dbaba66864ba3562dfccc288b1f01746ed4"} Feb 01 07:27:26 crc kubenswrapper[4835]: I0201 07:27:26.692171 4835 generic.go:334] "Generic (PLEG): container finished" podID="5fead728-7b7f-4ee9-b01e-455d536a88c5" containerID="d2483b22d57573b85ac190f9ef9e6d4d021206c16a4b9a259ca21ce3bc676263" exitCode=0 Feb 01 07:27:26 crc kubenswrapper[4835]: I0201 07:27:26.692230 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75mhs" event={"ID":"5fead728-7b7f-4ee9-b01e-455d536a88c5","Type":"ContainerDied","Data":"d2483b22d57573b85ac190f9ef9e6d4d021206c16a4b9a259ca21ce3bc676263"} Feb 01 07:27:26 crc kubenswrapper[4835]: I0201 07:27:26.716010 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ghmxq" podStartSLOduration=2.273221061 podStartE2EDuration="3.715992072s" podCreationTimestamp="2026-02-01 07:27:23 +0000 UTC" firstStartedPulling="2026-02-01 07:27:24.658199815 +0000 UTC m=+317.778636259" lastFinishedPulling="2026-02-01 07:27:26.100970796 +0000 UTC m=+319.221407270" observedRunningTime="2026-02-01 07:27:26.71591708 +0000 UTC m=+319.836353514" watchObservedRunningTime="2026-02-01 07:27:26.715992072 +0000 UTC m=+319.836428516" Feb 01 07:27:26 crc kubenswrapper[4835]: I0201 07:27:26.748184 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 01 07:27:27 crc kubenswrapper[4835]: I0201 07:27:27.700321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75mhs" event={"ID":"5fead728-7b7f-4ee9-b01e-455d536a88c5","Type":"ContainerStarted","Data":"5b87fd0df43de20c1c8f6d921d84e6080d54e5d2cadfd41bc826c9b5485e7b95"} Feb 01 07:27:27 crc kubenswrapper[4835]: I0201 07:27:27.719847 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-75mhs" podStartSLOduration=2.268787224 podStartE2EDuration="4.719834508s" podCreationTimestamp="2026-02-01 07:27:23 +0000 UTC" firstStartedPulling="2026-02-01 07:27:24.66059634 +0000 UTC m=+317.781032784" lastFinishedPulling="2026-02-01 07:27:27.111643594 +0000 UTC m=+320.232080068" observedRunningTime="2026-02-01 07:27:27.717780372 +0000 UTC m=+320.838216806" watchObservedRunningTime="2026-02-01 07:27:27.719834508 +0000 UTC m=+320.840270942" Feb 01 07:27:28 crc kubenswrapper[4835]: I0201 07:27:28.205689 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.032439 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.034362 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.103862 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 
07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.414595 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.414802 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.486686 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.794878 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.800646 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.641274 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.641699 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.708466 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.803213 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.810614 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.811729 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:34 crc kubenswrapper[4835]: I0201 07:27:34.859122 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-75mhs" podUID="5fead728-7b7f-4ee9-b01e-455d536a88c5" containerName="registry-server" probeResult="failure" output=< Feb 01 07:27:34 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 01 07:27:34 crc kubenswrapper[4835]: > Feb 01 07:27:43 crc kubenswrapper[4835]: I0201 07:27:43.879217 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:43 crc kubenswrapper[4835]: I0201 07:27:43.953084 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.415713 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vf2w6"] Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.416824 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.435386 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vf2w6"] Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.512776 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-bound-sa-token\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.512863 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.512912 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrhwr\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-kube-api-access-nrhwr\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.513010 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.513072 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-trusted-ca\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.513107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.513169 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-tls\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.513219 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-certificates\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.553466 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614461 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-tls\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614550 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-certificates\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614610 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-bound-sa-token\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614716 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614803 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrhwr\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-kube-api-access-nrhwr\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614876 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-trusted-ca\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614933 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.615742 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.616815 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-trusted-ca\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.617387 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-certificates\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.621767 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-tls\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.622371 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.641372 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrhwr\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-kube-api-access-nrhwr\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.643032 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-bound-sa-token\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.751840 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6"
Feb 01 07:27:59 crc kubenswrapper[4835]: I0201 07:27:59.213374 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vf2w6"]
Feb 01 07:27:59 crc kubenswrapper[4835]: I0201 07:27:59.914836 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" event={"ID":"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d","Type":"ContainerStarted","Data":"071e2509ffd8cb3110efedb31c69070508669e9d5876115c0fa6fd27f476f51b"}
Feb 01 07:27:59 crc kubenswrapper[4835]: I0201 07:27:59.914887 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" event={"ID":"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d","Type":"ContainerStarted","Data":"d61341de0abb514b232bd4985f15ba4a6fa226486179fedabf3cd9d55c8ac98f"}
Feb 01 07:27:59 crc kubenswrapper[4835]: I0201 07:27:59.915047 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6"
Feb 01 07:27:59 crc kubenswrapper[4835]: I0201 07:27:59.949024 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" podStartSLOduration=1.9490044709999998 podStartE2EDuration="1.949004471s" podCreationTimestamp="2026-02-01 07:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:27:59.943505023 +0000 UTC m=+353.063941467" watchObservedRunningTime="2026-02-01 07:27:59.949004471 +0000 UTC m=+353.069440915"
Feb 01 07:28:15 crc kubenswrapper[4835]: I0201 07:28:15.505188 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"]
Feb 01 07:28:15 crc kubenswrapper[4835]: I0201 07:28:15.505920 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerName="controller-manager" containerID="cri-o://6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba" gracePeriod=30
Feb 01 07:28:15 crc kubenswrapper[4835]: I0201 07:28:15.928706 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.013215 4835 generic.go:334] "Generic (PLEG): container finished" podID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerID="6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba" exitCode=0
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.013295 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
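The "Observed pod startup duration" record reports podStartSLOduration as watchObservedRunningTime minus podCreationTimestamp (07:27:59.949004471 - 07:27:58 = 1.949004471s; the rendering 1.9490044709999998 is the usual binary-float artifact), with image-pull time excluded when the pulling timestamps are the zero value, as they are here. A sketch of that arithmetic, simplified from what the kubelet's pod_startup_latency_tracker appears to compute; the helper name is hypothetical:

// slo.go - sketch of the startup-SLO arithmetic behind the record above.
package main

import (
	"fmt"
	"time"
)

// podStartSLO is a hypothetical helper: end-to-end duration minus time
// spent pulling images (zero here, both pull timestamps are zero time).
func podStartSLO(created, firstPull, lastPull, running time.Time) time.Duration {
	d := running.Sub(created)
	if !firstPull.IsZero() && !lastPull.IsZero() {
		d -= lastPull.Sub(firstPull) // exclude image pull time
	}
	return d
}

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2026-02-01T07:27:58Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-02-01T07:27:59.949004471Z")
	// Prints 1.949004471s, matching podStartE2EDuration in the record.
	fmt.Println(podStartSLO(created, time.Time{}, time.Time{}, running))
}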
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.013352 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" event={"ID":"10fad4bf-1fb3-4455-a349-fefb7f585c30","Type":"ContainerDied","Data":"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba"}
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.013974 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" event={"ID":"10fad4bf-1fb3-4455-a349-fefb7f585c30","Type":"ContainerDied","Data":"6f7177eac40a95ebcf05587a253beb2d53eb30227ac5417fca9d9b44c2b17f2d"}
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.013997 4835 scope.go:117] "RemoveContainer" containerID="6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba"
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.027748 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert\") pod \"10fad4bf-1fb3-4455-a349-fefb7f585c30\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") "
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.027863 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca\") pod \"10fad4bf-1fb3-4455-a349-fefb7f585c30\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") "
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.027917 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t58dp\" (UniqueName: \"kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp\") pod \"10fad4bf-1fb3-4455-a349-fefb7f585c30\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") "
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.028012 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles\") pod \"10fad4bf-1fb3-4455-a349-fefb7f585c30\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") "
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.028697 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config\") pod \"10fad4bf-1fb3-4455-a349-fefb7f585c30\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") "
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.029140 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca" (OuterVolumeSpecName: "client-ca") pod "10fad4bf-1fb3-4455-a349-fefb7f585c30" (UID: "10fad4bf-1fb3-4455-a349-fefb7f585c30"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.029217 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "10fad4bf-1fb3-4455-a349-fefb7f585c30" (UID: "10fad4bf-1fb3-4455-a349-fefb7f585c30"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.029232 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config" (OuterVolumeSpecName: "config") pod "10fad4bf-1fb3-4455-a349-fefb7f585c30" (UID: "10fad4bf-1fb3-4455-a349-fefb7f585c30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.032915 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "10fad4bf-1fb3-4455-a349-fefb7f585c30" (UID: "10fad4bf-1fb3-4455-a349-fefb7f585c30"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.034178 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp" (OuterVolumeSpecName: "kube-api-access-t58dp") pod "10fad4bf-1fb3-4455-a349-fefb7f585c30" (UID: "10fad4bf-1fb3-4455-a349-fefb7f585c30"). InnerVolumeSpecName "kube-api-access-t58dp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.048014 4835 scope.go:117] "RemoveContainer" containerID="6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba"
Feb 01 07:28:16 crc kubenswrapper[4835]: E0201 07:28:16.048542 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba\": container with ID starting with 6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba not found: ID does not exist" containerID="6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba"
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.048600 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba"} err="failed to get container status \"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba\": rpc error: code = NotFound desc = could not find container \"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba\": container with ID starting with 6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba not found: ID does not exist"
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.130367 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.130428 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.130440 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t58dp\" (UniqueName: \"kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.130449 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
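The E0201 pair above is a benign race: the container was already removed, so the follow-up CRI ContainerStatus call returns gRPC NotFound and the DeleteContainer path logs it and moves on. A sketch of distinguishing that case in a CRI client using google.golang.org/grpc/status; the helper is hypothetical, not the kubelet's actual runtime-service wrapper:

// notfound.go - sketch: treat CRI NotFound as "already gone".
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isAlreadyGone reports whether a CRI call failed only because the
// container no longer exists, as in the NotFound record above.
func isAlreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Stand-in for the "ContainerStatus from runtime service failed" error.
	err := status.Error(codes.NotFound, "could not find container: ID does not exist")
	if isAlreadyGone(err) {
		fmt.Println("container already removed; treat delete as complete")
	}
}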
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.130459 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.360097 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"]
Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.366585 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"]
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.107818 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7767cd8d75-j5r8c"]
Feb 01 07:28:17 crc kubenswrapper[4835]: E0201 07:28:17.108236 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerName="controller-manager"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.108252 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerName="controller-manager"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.108363 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerName="controller-manager"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.108862 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.110837 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.111117 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.111484 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.114828 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.114905 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.115021 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.121771 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.133866 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7767cd8d75-j5r8c"]
Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.255351 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-proxy-ca-bundles\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c"
\"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.255912 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-client-ca\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.255993 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-serving-cert\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.256070 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-config\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.256178 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5vrz\" (UniqueName: \"kubernetes.io/projected/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-kube-api-access-w5vrz\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.357561 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-client-ca\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.357615 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-serving-cert\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.357653 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-config\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.357726 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5vrz\" (UniqueName: \"kubernetes.io/projected/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-kube-api-access-w5vrz\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " 
pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.357767 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-proxy-ca-bundles\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.358930 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-client-ca\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.359034 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-proxy-ca-bundles\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.359661 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-config\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.362996 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-serving-cert\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.374990 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5vrz\" (UniqueName: \"kubernetes.io/projected/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-kube-api-access-w5vrz\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.427234 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.578316 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" path="/var/lib/kubelet/pods/10fad4bf-1fb3-4455-a349-fefb7f585c30/volumes" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.618690 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7767cd8d75-j5r8c"] Feb 01 07:28:17 crc kubenswrapper[4835]: W0201 07:28:17.628443 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dfc9814_856f_4e2b_ac49_32b78a2d0b7c.slice/crio-410d7c0289a65fc1f7a68d20d31e1eda02fcc1ac6cc33c3da8a4f8d1bdf75734 WatchSource:0}: Error finding container 410d7c0289a65fc1f7a68d20d31e1eda02fcc1ac6cc33c3da8a4f8d1bdf75734: Status 404 returned error can't find the container with id 410d7c0289a65fc1f7a68d20d31e1eda02fcc1ac6cc33c3da8a4f8d1bdf75734 Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.026825 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" event={"ID":"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c","Type":"ContainerStarted","Data":"0a89d58a187e594dd324d0508e36cee170d5705151a3f1083249f57f47db8f94"} Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.027208 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" event={"ID":"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c","Type":"ContainerStarted","Data":"410d7c0289a65fc1f7a68d20d31e1eda02fcc1ac6cc33c3da8a4f8d1bdf75734"} Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.027613 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.047238 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.049679 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" podStartSLOduration=3.04965885 podStartE2EDuration="3.04965885s" podCreationTimestamp="2026-02-01 07:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:28:18.045893289 +0000 UTC m=+371.166329723" watchObservedRunningTime="2026-02-01 07:28:18.04965885 +0000 UTC m=+371.170095284" Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.762348 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.833957 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"] Feb 01 07:28:25 crc kubenswrapper[4835]: I0201 07:28:25.192512 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:28:25 crc kubenswrapper[4835]: I0201 
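The recurring patch_prober/prober pair is an HTTP liveness probe: GET http://127.0.0.1:8798/health, failing here because nothing is listening on that port (connection refused). A sketch of the equivalent check with plain net/http; the kubelet's prober additionally applies the probe spec's timeout, headers and failure thresholds:

// probe.go - sketch of the HTTP liveness check behind "Probe failed".
package main

import (
	"fmt"
	"net/http"
	"time"
)

// healthy returns nil on a 2xx/3xx response, an error otherwise
// (the kubelet treats 200 <= status < 400 as probe success).
func healthy(url string) error {
	c := &http.Client{Timeout: time.Second}
	resp, err := c.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" as logged above
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := healthy("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Liveness probe status=failure:", err)
	}
}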
Feb 01 07:28:43 crc kubenswrapper[4835]: I0201 07:28:43.893050 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" podUID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" containerName="registry" containerID="cri-o://3f33f19419e62411bac7a2082cf36c839014695310e5de008fdbd44a3e0eba81" gracePeriod=30
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.201826 4835 generic.go:334] "Generic (PLEG): container finished" podID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" containerID="3f33f19419e62411bac7a2082cf36c839014695310e5de008fdbd44a3e0eba81" exitCode=0
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.201964 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" event={"ID":"ac521dca-2154-40bb-bbdb-a22e3d6abd72","Type":"ContainerDied","Data":"3f33f19419e62411bac7a2082cf36c839014695310e5de008fdbd44a3e0eba81"}
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.435658 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624488 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") "
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624558 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") "
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624615 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") "
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624667 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") "
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624893 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") "
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624940 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") "
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.625034 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") "
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.625126 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7bnj\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") "
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.626290 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.626315 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.627383 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.627732 4835 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.631440 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.637047 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.637956 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.638034 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj" (OuterVolumeSpecName: "kube-api-access-w7bnj") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "kube-api-access-w7bnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.642070 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.659574 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.731043 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.731955 4835 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.731977 4835 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.731997 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7bnj\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.732017 4835 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.211727 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" event={"ID":"ac521dca-2154-40bb-bbdb-a22e3d6abd72","Type":"ContainerDied","Data":"7009647035bcb9b3d9a9385f910f574abe92ca7bc6f2836a8743b47eb765ed4a"}
Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.211784 4835 scope.go:117] "RemoveContainer" containerID="3f33f19419e62411bac7a2082cf36c839014695310e5de008fdbd44a3e0eba81"
Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.211834 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
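"Killing container with a grace period" means: ask the runtime to deliver SIGTERM, and escalate to SIGKILL only if the container is still running when the grace period expires (30s here; 600s for the machine-config-daemon restart further down). The registry exited with exitCode=0 well inside its window. A sketch of the same stop pattern for a local process via os/exec, as an illustration of the semantics rather than CRI-O's implementation:

// gracekill.go - sketch: SIGTERM, then SIGKILL after the grace period.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite request first
	select {
	case err := <-done:
		return err // exited within the grace period, like exitCode=0 above
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period expired: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGrace(cmd, 30*time.Second))
}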
Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.263550 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"]
Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.270830 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"]
Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.578813 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" path="/var/lib/kubelet/pods/ac521dca-2154-40bb-bbdb-a22e3d6abd72/volumes"
Feb 01 07:28:55 crc kubenswrapper[4835]: I0201 07:28:55.191614 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:28:55 crc kubenswrapper[4835]: I0201 07:28:55.192369 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.192286 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.193053 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.193125 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78"
Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.194179 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.194297 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf" gracePeriod=600
Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.487759 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf" exitCode=0
Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.487927 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf"}
Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.488105 4835 scope.go:117] "RemoveContainer" containerID="b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5"
Feb 01 07:29:26 crc kubenswrapper[4835]: I0201 07:29:26.497504 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f"}
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.208157 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"]
Feb 01 07:30:00 crc kubenswrapper[4835]: E0201 07:30:00.209603 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" containerName="registry"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.209661 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" containerName="registry"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.209907 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" containerName="registry"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.211166 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.214039 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.214201 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.216880 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"]
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.245909 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.246002 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79cd\" (UniqueName: \"kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.246052 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.346996 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.347062 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h79cd\" (UniqueName: \"kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.347100 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.348247 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.361100 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.379713 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79cd\" (UniqueName: \"kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.540094 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:01 crc kubenswrapper[4835]: I0201 07:30:01.029108 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"]
Feb 01 07:30:01 crc kubenswrapper[4835]: I0201 07:30:01.741509 4835 generic.go:334] "Generic (PLEG): container finished" podID="2a3f2951-1c06-484a-9c2e-502d2adaa6cd" containerID="6a8f3f0f8045324c04ea0f25d07e785228bc538f428f47c8c77a96101a2d3e96" exitCode=0
Feb 01 07:30:01 crc kubenswrapper[4835]: I0201 07:30:01.741550 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" event={"ID":"2a3f2951-1c06-484a-9c2e-502d2adaa6cd","Type":"ContainerDied","Data":"6a8f3f0f8045324c04ea0f25d07e785228bc538f428f47c8c77a96101a2d3e96"}
Feb 01 07:30:01 crc kubenswrapper[4835]: I0201 07:30:01.741573 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" event={"ID":"2a3f2951-1c06-484a-9c2e-502d2adaa6cd","Type":"ContainerStarted","Data":"5bf646b8fc2ede47108ed327acdc22c029c65c6b2d07abd2b9f281fee0ab2314"}
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.099569 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.208870 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h79cd\" (UniqueName: \"kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd\") pod \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") "
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.208943 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume\") pod \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") "
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.209084 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume\") pod \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") "
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.210957 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a3f2951-1c06-484a-9c2e-502d2adaa6cd" (UID: "2a3f2951-1c06-484a-9c2e-502d2adaa6cd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.219453 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd" (OuterVolumeSpecName: "kube-api-access-h79cd") pod "2a3f2951-1c06-484a-9c2e-502d2adaa6cd" (UID: "2a3f2951-1c06-484a-9c2e-502d2adaa6cd"). InnerVolumeSpecName "kube-api-access-h79cd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.220403 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a3f2951-1c06-484a-9c2e-502d2adaa6cd" (UID: "2a3f2951-1c06-484a-9c2e-502d2adaa6cd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.310286 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h79cd\" (UniqueName: \"kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd\") on node \"crc\" DevicePath \"\""
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.310717 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume\") on node \"crc\" DevicePath \"\""
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.310731 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.757456 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" event={"ID":"2a3f2951-1c06-484a-9c2e-502d2adaa6cd","Type":"ContainerDied","Data":"5bf646b8fc2ede47108ed327acdc22c029c65c6b2d07abd2b9f281fee0ab2314"}
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.757511 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf646b8fc2ede47108ed327acdc22c029c65c6b2d07abd2b9f281fee0ab2314"
Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.757520 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"
Feb 01 07:31:25 crc kubenswrapper[4835]: I0201 07:31:25.192022 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:31:25 crc kubenswrapper[4835]: I0201 07:31:25.193935 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.266030 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5z5dl"]
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268130 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-controller" containerID="cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" gracePeriod=30
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268272 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="northd" containerID="cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" gracePeriod=30
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268247 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="sbdb" containerID="cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" gracePeriod=30
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268170 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="nbdb" containerID="cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" gracePeriod=30
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268436 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-node" containerID="cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" gracePeriod=30
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268357 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-acl-logging" containerID="cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" gracePeriod=30
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268358 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" gracePeriod=30
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.374336 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" containerID="cri-o://a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" gracePeriod=30
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.622193 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/3.log"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.624378 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovn-acl-logging/0.log"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.624809 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovn-controller/0.log"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.625264 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673149 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mdtv2"]
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673376 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673392 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673406 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673430 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673442 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="northd"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673449 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="northd"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673461 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kubecfg-setup"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673468 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kubecfg-setup"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673482 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-node"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673489 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-node"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673500 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673507 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673515 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673522 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673531 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3f2951-1c06-484a-9c2e-502d2adaa6cd" containerName="collect-profiles"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673539 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3f2951-1c06-484a-9c2e-502d2adaa6cd" containerName="collect-profiles"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673552 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-ovn-metrics"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673559 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-ovn-metrics"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673568 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-acl-logging"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673574 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-acl-logging"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673584 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="nbdb"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673592 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="nbdb"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673601 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="sbdb"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673608 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="sbdb"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673618 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673625 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673723 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="northd"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673738 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673747 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="nbdb"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673755 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673765 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673772 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673782 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673790 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-ovn-metrics"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673800 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3f2951-1c06-484a-9c2e-502d2adaa6cd" containerName="collect-profiles"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673808 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="sbdb"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673816 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-acl-logging"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673824 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-node"
Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673927 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673936 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.675626 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.677827 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685688 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685783 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685832 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685878 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685944 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685973 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket" (OuterVolumeSpecName: "log-socket") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686004 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686015 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686039 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686060 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686108 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686115 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686144 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686167 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686200 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686215 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686254 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686299 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686323 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686371 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686438 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686451 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686476 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686501 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686551 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686606 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686661 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686698 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686761 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x78ft\" (UniqueName: \"kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686800 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686512 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash" (OuterVolumeSpecName: "host-slash") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686803 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686805 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686825 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687025 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687086 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log" (OuterVolumeSpecName: "node-log") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687226 4835 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687256 4835 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687280 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687342 4835 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687369 4835 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687391 4835 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687443 4835 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687468 4835 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687495 4835 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687521 4835 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687547 4835 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687570 4835 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687714 4835 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc 
kubenswrapper[4835]: I0201 07:31:30.687743 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687825 4835 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687850 4835 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687734 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.692604 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft" (OuterVolumeSpecName: "kube-api-access-x78ft") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "kube-api-access-x78ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.692771 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.717880 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788702 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-config\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788752 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-slash\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788775 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-log-socket\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788805 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-netd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788827 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-netns\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788955 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-script-lib\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovn-node-metrics-cert\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789189 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-var-lib-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789224 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-kubelet\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-ovn\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789294 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789320 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-bin\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789347 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-etc-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789429 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-env-overrides\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789484 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-systemd-units\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789531 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjv9\" (UniqueName: \"kubernetes.io/projected/d5ddf04a-df44-470d-bed4-da3b619f9bf9-kube-api-access-4mjv9\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789570 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 
07:31:30.789612 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-node-log\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789663 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-systemd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789702 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789778 4835 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789802 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x78ft\" (UniqueName: \"kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789820 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789875 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.890962 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-env-overrides\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891051 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-systemd-units\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891111 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mjv9\" (UniqueName: \"kubernetes.io/projected/d5ddf04a-df44-470d-bed4-da3b619f9bf9-kube-api-access-4mjv9\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891151 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891189 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-node-log\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891234 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-systemd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891264 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891256 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-systemd-units\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891302 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-config\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891466 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-slash\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891515 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-log-socket\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891616 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-netd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891642 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-node-log\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891676 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-netns\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891687 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891718 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-netd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891742 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-log-socket\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891746 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-systemd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891767 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891615 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-slash\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891792 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-netns\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891821 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-script-lib\") pod \"ovnkube-node-mdtv2\" (UID: 
\"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891870 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovn-node-metrics-cert\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891890 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-var-lib-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891907 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-kubelet\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891923 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-ovn\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891936 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891950 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-bin\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891965 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-etc-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892007 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-etc-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892026 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-env-overrides\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" 
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-ovn\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892181 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-kubelet\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892179 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-var-lib-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892237 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892254 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-bin\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892611 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-config\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.893121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-script-lib\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.897091 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovn-node-metrics-cert\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.925630 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mjv9\" (UniqueName: \"kubernetes.io/projected/d5ddf04a-df44-470d-bed4-da3b619f9bf9-kube-api-access-4mjv9\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.990457 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.375118 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/2.log"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.376692 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/1.log"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.376793 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e" containerID="bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22" exitCode=2
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.376893 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerDied","Data":"bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.376960 4835 scope.go:117] "RemoveContainer" containerID="c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.377553 4835 scope.go:117] "RemoveContainer" containerID="bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22"
Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.377852 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-25s9j_openshift-multus(c9342eb7-b5ae-47b2-a56d-91ae886e5f0e)\"" pod="openshift-multus/multus-25s9j" podUID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.380517 4835 generic.go:334] "Generic (PLEG): container finished" podID="d5ddf04a-df44-470d-bed4-da3b619f9bf9" containerID="24f5512aa6b4417e804a55252efc5ac2377797792510fedd6d27d314b906fe74" exitCode=0
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.380593 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerDied","Data":"24f5512aa6b4417e804a55252efc5ac2377797792510fedd6d27d314b906fe74"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.380636 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"b5514c320c794ac078eefc8f358925be6b5d029b1381514b07ff5668586492a3"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.383851 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/3.log"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.390584 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovn-acl-logging/0.log"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.391912 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovn-controller/0.log"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393012 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" exitCode=0
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393067 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" exitCode=0
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393080 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" exitCode=0
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393095 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" exitCode=0
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393105 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" exitCode=0
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393147 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" exitCode=0
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393159 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" exitCode=143
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393172 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" exitCode=143
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393226 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393264 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393315 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393337 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393352 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393394 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393457 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393472 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393482 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393491 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393500 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393620 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393631 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393639 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393647 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393801 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393817 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393834 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393844 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393853 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393863 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393872 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393880 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393890 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393898 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393956 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394017 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394033 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394130 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394149 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394160 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394230 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"}
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394239 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container"
containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394247 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394255 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394263 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394271 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394279 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394293 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"f2c33318aecd4d2a27c36deae504704dd76ecedc9768925c3ee036665f4c99e8"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394310 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394320 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394328 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394337 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394346 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394381 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394391 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394399 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394445 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394459 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394650 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.436696 4835 scope.go:117] "RemoveContainer" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.492691 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5z5dl"] Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.494613 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.496552 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5z5dl"] Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.536692 4835 scope.go:117] "RemoveContainer" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.558058 4835 scope.go:117] "RemoveContainer" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.581752 4835 scope.go:117] "RemoveContainer" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.585772 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" path="/var/lib/kubelet/pods/bd62f19b-07ab-4cc5-84a3-2f097c278de7/volumes" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.618980 4835 scope.go:117] "RemoveContainer" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.642595 4835 scope.go:117] "RemoveContainer" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.671542 4835 scope.go:117] "RemoveContainer" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.690602 4835 scope.go:117] "RemoveContainer" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.721082 4835 scope.go:117] "RemoveContainer" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.739035 4835 scope.go:117] "RemoveContainer" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.741967 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": container with ID starting with a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca not found: ID does not exist" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742014 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} err="failed to get container status \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": rpc error: code = NotFound desc = could not find container \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": container with ID starting with a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742040 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.742447 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": container with ID starting with 9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe not found: ID does not exist" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742483 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} err="failed to get container status \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": rpc error: code = NotFound desc = could not find container \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": container with ID starting with 9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742504 4835 scope.go:117] "RemoveContainer" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.742849 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": container with ID starting with 85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227 not found: ID does not exist" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742883 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} err="failed to get container status \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": rpc error: code = NotFound desc = could not find container \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": container with ID starting with 85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742941 4835 scope.go:117] "RemoveContainer" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" Feb 01 07:31:31 crc 
kubenswrapper[4835]: E0201 07:31:31.743325 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": container with ID starting with 0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514 not found: ID does not exist" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.743349 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} err="failed to get container status \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": rpc error: code = NotFound desc = could not find container \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": container with ID starting with 0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.743366 4835 scope.go:117] "RemoveContainer" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.743595 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": container with ID starting with c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4 not found: ID does not exist" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.743618 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} err="failed to get container status \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": rpc error: code = NotFound desc = could not find container \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": container with ID starting with c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.743632 4835 scope.go:117] "RemoveContainer" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.743820 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": container with ID starting with 1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc not found: ID does not exist" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.743845 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} err="failed to get container status \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": rpc error: code = NotFound desc = could not find container \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": container with ID starting with 1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: 
I0201 07:31:31.743859 4835 scope.go:117] "RemoveContainer" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.744092 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": container with ID starting with 044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84 not found: ID does not exist" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744112 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} err="failed to get container status \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": rpc error: code = NotFound desc = could not find container \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": container with ID starting with 044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744125 4835 scope.go:117] "RemoveContainer" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.744518 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": container with ID starting with 03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc not found: ID does not exist" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744542 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} err="failed to get container status \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": rpc error: code = NotFound desc = could not find container \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": container with ID starting with 03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744556 4835 scope.go:117] "RemoveContainer" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.744798 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": container with ID starting with 8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc not found: ID does not exist" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744819 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} err="failed to get container status \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": rpc error: code = NotFound desc = could not find container \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": container 
with ID starting with 8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744831 4835 scope.go:117] "RemoveContainer" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.745126 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": container with ID starting with b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764 not found: ID does not exist" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745151 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} err="failed to get container status \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": rpc error: code = NotFound desc = could not find container \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": container with ID starting with b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745188 4835 scope.go:117] "RemoveContainer" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745477 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} err="failed to get container status \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": rpc error: code = NotFound desc = could not find container \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": container with ID starting with a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745496 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745814 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} err="failed to get container status \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": rpc error: code = NotFound desc = could not find container \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": container with ID starting with 9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745836 4835 scope.go:117] "RemoveContainer" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746107 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} err="failed to get container status \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": rpc error: code = NotFound desc = could not find container \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": container 
with ID starting with 85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746126 4835 scope.go:117] "RemoveContainer" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746371 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} err="failed to get container status \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": rpc error: code = NotFound desc = could not find container \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": container with ID starting with 0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746391 4835 scope.go:117] "RemoveContainer" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746695 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} err="failed to get container status \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": rpc error: code = NotFound desc = could not find container \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": container with ID starting with c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746709 4835 scope.go:117] "RemoveContainer" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746893 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} err="failed to get container status \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": rpc error: code = NotFound desc = could not find container \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": container with ID starting with 1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746911 4835 scope.go:117] "RemoveContainer" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747119 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} err="failed to get container status \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": rpc error: code = NotFound desc = could not find container \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": container with ID starting with 044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747140 4835 scope.go:117] "RemoveContainer" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747347 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} err="failed to get container status \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": rpc error: code = NotFound desc = could not find container \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": container with ID starting with 03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747372 4835 scope.go:117] "RemoveContainer" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747669 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} err="failed to get container status \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": rpc error: code = NotFound desc = could not find container \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": container with ID starting with 8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747712 4835 scope.go:117] "RemoveContainer" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747901 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} err="failed to get container status \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": rpc error: code = NotFound desc = could not find container \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": container with ID starting with b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747915 4835 scope.go:117] "RemoveContainer" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748115 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} err="failed to get container status \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": rpc error: code = NotFound desc = could not find container \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": container with ID starting with a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748135 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748363 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} err="failed to get container status \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": rpc error: code = NotFound desc = could not find container \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": container with ID starting with 9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe not found: ID does not exist" Feb 
01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748382 4835 scope.go:117] "RemoveContainer" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748574 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} err="failed to get container status \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": rpc error: code = NotFound desc = could not find container \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": container with ID starting with 85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748592 4835 scope.go:117] "RemoveContainer" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748778 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} err="failed to get container status \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": rpc error: code = NotFound desc = could not find container \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": container with ID starting with 0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748796 4835 scope.go:117] "RemoveContainer" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749054 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} err="failed to get container status \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": rpc error: code = NotFound desc = could not find container \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": container with ID starting with c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749083 4835 scope.go:117] "RemoveContainer" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749362 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} err="failed to get container status \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": rpc error: code = NotFound desc = could not find container \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": container with ID starting with 1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749397 4835 scope.go:117] "RemoveContainer" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749719 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} err="failed to get container status 
\"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": rpc error: code = NotFound desc = could not find container \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": container with ID starting with 044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749779 4835 scope.go:117] "RemoveContainer" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749977 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} err="failed to get container status \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": rpc error: code = NotFound desc = could not find container \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": container with ID starting with 03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749997 4835 scope.go:117] "RemoveContainer" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750230 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} err="failed to get container status \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": rpc error: code = NotFound desc = could not find container \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": container with ID starting with 8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750250 4835 scope.go:117] "RemoveContainer" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750495 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} err="failed to get container status \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": rpc error: code = NotFound desc = could not find container \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": container with ID starting with b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750511 4835 scope.go:117] "RemoveContainer" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750696 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} err="failed to get container status \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": rpc error: code = NotFound desc = could not find container \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": container with ID starting with a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750716 4835 scope.go:117] "RemoveContainer" 
containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750969 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} err="failed to get container status \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": rpc error: code = NotFound desc = could not find container \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": container with ID starting with 9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750986 4835 scope.go:117] "RemoveContainer" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751157 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} err="failed to get container status \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": rpc error: code = NotFound desc = could not find container \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": container with ID starting with 85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751173 4835 scope.go:117] "RemoveContainer" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751348 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} err="failed to get container status \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": rpc error: code = NotFound desc = could not find container \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": container with ID starting with 0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751372 4835 scope.go:117] "RemoveContainer" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751680 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} err="failed to get container status \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": rpc error: code = NotFound desc = could not find container \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": container with ID starting with c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751699 4835 scope.go:117] "RemoveContainer" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752082 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} err="failed to get container status \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": rpc error: code = NotFound desc = could not find 
container \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": container with ID starting with 1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752131 4835 scope.go:117] "RemoveContainer" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752435 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} err="failed to get container status \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": rpc error: code = NotFound desc = could not find container \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": container with ID starting with 044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752478 4835 scope.go:117] "RemoveContainer" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752721 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} err="failed to get container status \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": rpc error: code = NotFound desc = could not find container \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": container with ID starting with 03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752742 4835 scope.go:117] "RemoveContainer" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752934 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} err="failed to get container status \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": rpc error: code = NotFound desc = could not find container \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": container with ID starting with 8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752951 4835 scope.go:117] "RemoveContainer" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.753101 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} err="failed to get container status \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": rpc error: code = NotFound desc = could not find container \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": container with ID starting with b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764 not found: ID does not exist" Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.402191 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/2.log" Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407832 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"dbcbc256b0f131ca2b6c1cfbd3aeb5371427e85f319782dbf393cc93bb2fd2b0"}
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407889 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"5db52c4bd2827eddeba3b09b09546d9709dd5e26cdac6cf311a8ec574d561439"}
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"785f885832ef7f890099766870311dc6a1c4249a24ff3f7f2ee9f620842f97db"}
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407918 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"829756ac4125eb06d368e1d99fa1ee2ed9484a5e3089823b4f99bacb2042fdd8"}
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407929 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"992415a416ad4d1e420259ed20c17ccaec6977a78fb9806ab8897b66bb75a925"}
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407950 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"8097a2ea0075ffa8d20b9fbea73d6acb0b6cc54d195f0443940f2c15f56ab527"}
Feb 01 07:31:35 crc kubenswrapper[4835]: I0201 07:31:35.434492 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"46207584995c980d15104b73ad0655c7051ed75ec1f99cd69ecfecfc739afab4"}
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.451881 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"1774ab46905b3b12b974f1840568e297f715ac691a55523ad947ef04623faaa9"}
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.453386 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.453580 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.453740 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.485693 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.486072 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.495612 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" podStartSLOduration=7.49558774 podStartE2EDuration="7.49558774s" podCreationTimestamp="2026-02-01 07:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:31:37.490374581 +0000 UTC m=+570.610811065" watchObservedRunningTime="2026-02-01 07:31:37.49558774 +0000 UTC m=+570.616024204"
Feb 01 07:31:43 crc kubenswrapper[4835]: I0201 07:31:43.567073 4835 scope.go:117] "RemoveContainer" containerID="bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22"
Feb 01 07:31:43 crc kubenswrapper[4835]: E0201 07:31:43.568041 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-25s9j_openshift-multus(c9342eb7-b5ae-47b2-a56d-91ae886e5f0e)\"" pod="openshift-multus/multus-25s9j" podUID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e"
Feb 01 07:31:55 crc kubenswrapper[4835]: I0201 07:31:55.191938 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:31:55 crc kubenswrapper[4835]: I0201 07:31:55.192657 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.481716 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"]
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.482949 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.486655 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.501249 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"]
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.572169 4835 scope.go:117] "RemoveContainer" containerID="bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.605458 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.605525 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwgjr\" (UniqueName: \"kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.605632 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.707676 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.707895 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.707979 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwgjr\" (UniqueName: \"kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.709644 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.709643 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.744402 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwgjr\" (UniqueName: \"kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.809799 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: E0201 07:31:57.853257 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(8c164d2b9babb42a33b20baec3bb11a79bea7669d08edffccc2ed4dd179c8b68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 01 07:31:57 crc kubenswrapper[4835]: E0201 07:31:57.853351 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(8c164d2b9babb42a33b20baec3bb11a79bea7669d08edffccc2ed4dd179c8b68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: E0201 07:31:57.853388 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(8c164d2b9babb42a33b20baec3bb11a79bea7669d08edffccc2ed4dd179c8b68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:57 crc kubenswrapper[4835]: E0201 07:31:57.853515 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace(042bee18-1826-42db-a17a-6f0e3d488c16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace(042bee18-1826-42db-a17a-6f0e3d488c16)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(8c164d2b9babb42a33b20baec3bb11a79bea7669d08edffccc2ed4dd179c8b68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" podUID="042bee18-1826-42db-a17a-6f0e3d488c16"
Feb 01 07:31:58 crc kubenswrapper[4835]: I0201 07:31:58.592739 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/2.log"
Feb 01 07:31:58 crc kubenswrapper[4835]: I0201 07:31:58.593460 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:58 crc kubenswrapper[4835]: I0201 07:31:58.593485 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerStarted","Data":"0a2144e34183d71af06e054153405ef8fcb42063704ecebf25092a89df054ed9"}
Feb 01 07:31:58 crc kubenswrapper[4835]: I0201 07:31:58.594038 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:58 crc kubenswrapper[4835]: E0201 07:31:58.634712 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(86ee98b0a66bb1edcf7e0f987ca19d666ab87d4e2386933d95ee45d0b69b9e95): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 01 07:31:58 crc kubenswrapper[4835]: E0201 07:31:58.634820 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(86ee98b0a66bb1edcf7e0f987ca19d666ab87d4e2386933d95ee45d0b69b9e95): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:58 crc kubenswrapper[4835]: E0201 07:31:58.634871 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(86ee98b0a66bb1edcf7e0f987ca19d666ab87d4e2386933d95ee45d0b69b9e95): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:31:58 crc kubenswrapper[4835]: E0201 07:31:58.634972 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace(042bee18-1826-42db-a17a-6f0e3d488c16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace(042bee18-1826-42db-a17a-6f0e3d488c16)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(86ee98b0a66bb1edcf7e0f987ca19d666ab87d4e2386933d95ee45d0b69b9e95): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" podUID="042bee18-1826-42db-a17a-6f0e3d488c16"
Feb 01 07:32:01 crc kubenswrapper[4835]: I0201 07:32:01.030016 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:32:09 crc kubenswrapper[4835]: I0201 07:32:09.566471 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"
Feb 01 07:32:09 crc kubenswrapper[4835]: I0201 07:32:09.567542 4835 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:32:09 crc kubenswrapper[4835]: I0201 07:32:09.847838 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"] Feb 01 07:32:10 crc kubenswrapper[4835]: I0201 07:32:10.677832 4835 generic.go:334] "Generic (PLEG): container finished" podID="042bee18-1826-42db-a17a-6f0e3d488c16" containerID="8062e953dae2e28ddc40103a506983b616b949425b693d1e6aa423ddac541f1b" exitCode=0 Feb 01 07:32:10 crc kubenswrapper[4835]: I0201 07:32:10.677963 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" event={"ID":"042bee18-1826-42db-a17a-6f0e3d488c16","Type":"ContainerDied","Data":"8062e953dae2e28ddc40103a506983b616b949425b693d1e6aa423ddac541f1b"} Feb 01 07:32:10 crc kubenswrapper[4835]: I0201 07:32:10.678022 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" event={"ID":"042bee18-1826-42db-a17a-6f0e3d488c16","Type":"ContainerStarted","Data":"a3c47286c8b6f7000a88e99dc173b33b188c1b566b7c054b5310da55f59ac601"} Feb 01 07:32:10 crc kubenswrapper[4835]: I0201 07:32:10.681660 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 07:32:12 crc kubenswrapper[4835]: I0201 07:32:12.695746 4835 generic.go:334] "Generic (PLEG): container finished" podID="042bee18-1826-42db-a17a-6f0e3d488c16" containerID="5f0f8ee150508c7ea5a48ce98e50eb6eee3f894d53028cc3bde304a59ac17ca5" exitCode=0 Feb 01 07:32:12 crc kubenswrapper[4835]: I0201 07:32:12.695812 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" event={"ID":"042bee18-1826-42db-a17a-6f0e3d488c16","Type":"ContainerDied","Data":"5f0f8ee150508c7ea5a48ce98e50eb6eee3f894d53028cc3bde304a59ac17ca5"} Feb 01 07:32:13 crc kubenswrapper[4835]: I0201 07:32:13.705131 4835 generic.go:334] "Generic (PLEG): container finished" podID="042bee18-1826-42db-a17a-6f0e3d488c16" containerID="b705539489355a2f4f704e6a327bf087ad33a74a620a6d9e0ac64ad131705044" exitCode=0 Feb 01 07:32:13 crc kubenswrapper[4835]: I0201 07:32:13.705314 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" event={"ID":"042bee18-1826-42db-a17a-6f0e3d488c16","Type":"ContainerDied","Data":"b705539489355a2f4f704e6a327bf087ad33a74a620a6d9e0ac64ad131705044"} Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.010180 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.070084 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util\") pod \"042bee18-1826-42db-a17a-6f0e3d488c16\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.070157 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwgjr\" (UniqueName: \"kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr\") pod \"042bee18-1826-42db-a17a-6f0e3d488c16\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.070309 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle\") pod \"042bee18-1826-42db-a17a-6f0e3d488c16\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.071920 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle" (OuterVolumeSpecName: "bundle") pod "042bee18-1826-42db-a17a-6f0e3d488c16" (UID: "042bee18-1826-42db-a17a-6f0e3d488c16"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.080588 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr" (OuterVolumeSpecName: "kube-api-access-kwgjr") pod "042bee18-1826-42db-a17a-6f0e3d488c16" (UID: "042bee18-1826-42db-a17a-6f0e3d488c16"). InnerVolumeSpecName "kube-api-access-kwgjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.099657 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util" (OuterVolumeSpecName: "util") pod "042bee18-1826-42db-a17a-6f0e3d488c16" (UID: "042bee18-1826-42db-a17a-6f0e3d488c16"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.171587 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.171646 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwgjr\" (UniqueName: \"kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr\") on node \"crc\" DevicePath \"\"" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.171672 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.723008 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" event={"ID":"042bee18-1826-42db-a17a-6f0e3d488c16","Type":"ContainerDied","Data":"a3c47286c8b6f7000a88e99dc173b33b188c1b566b7c054b5310da55f59ac601"} Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.723067 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c47286c8b6f7000a88e99dc173b33b188c1b566b7c054b5310da55f59ac601" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.723086 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.191492 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.192002 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.192051 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.192673 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.192738 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f" gracePeriod=600 Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.805202 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f" exitCode=0 Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.805275 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f"} Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.805564 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa"} Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.805592 4835 scope.go:117] "RemoveContainer" containerID="9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.809436 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h"] Feb 01 07:32:25 crc kubenswrapper[4835]: E0201 07:32:25.809626 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="extract" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.809641 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="extract" Feb 01 07:32:25 crc kubenswrapper[4835]: E0201 07:32:25.809658 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="util" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.809664 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="util" Feb 01 07:32:25 crc kubenswrapper[4835]: E0201 07:32:25.809673 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="pull" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.809680 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="pull" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.809761 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="extract" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.810103 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.813862 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-r8t9q" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.814026 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.814054 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.814050 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.815756 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.837681 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h"] Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.917649 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-webhook-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.917890 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-apiservice-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.917984 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxms\" (UniqueName: \"kubernetes.io/projected/91863ede-5184-40d2-8fba-1f65d6fdc785-kube-api-access-lxxms\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.018909 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-webhook-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.018962 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-apiservice-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.018983 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxxms\" (UniqueName: \"kubernetes.io/projected/91863ede-5184-40d2-8fba-1f65d6fdc785-kube-api-access-lxxms\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.025625 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-apiservice-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.026103 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-webhook-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.035273 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxxms\" (UniqueName: \"kubernetes.io/projected/91863ede-5184-40d2-8fba-1f65d6fdc785-kube-api-access-lxxms\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.124642 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.155802 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr"] Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.156978 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.158590 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.158864 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6jkfp" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.158968 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.173639 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr"] Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.221151 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-apiservice-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.221220 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-webhook-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.221248 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flf84\" (UniqueName: \"kubernetes.io/projected/c2ca8e92-ef3f-442a-830f-0e3c37d76087-kube-api-access-flf84\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.322473 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-webhook-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.322805 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flf84\" (UniqueName: \"kubernetes.io/projected/c2ca8e92-ef3f-442a-830f-0e3c37d76087-kube-api-access-flf84\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.322868 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-apiservice-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.339167 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-webhook-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.348005 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-apiservice-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.352270 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flf84\" (UniqueName: \"kubernetes.io/projected/c2ca8e92-ef3f-442a-830f-0e3c37d76087-kube-api-access-flf84\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.392075 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h"] Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.480512 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.722119 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr"] Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.812056 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" event={"ID":"91863ede-5184-40d2-8fba-1f65d6fdc785","Type":"ContainerStarted","Data":"749e11ec6a06ab913e302bac7c95c2bd78a90ba2132a58a5f523b3faeed645ed"} Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.813559 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" event={"ID":"c2ca8e92-ef3f-442a-830f-0e3c37d76087","Type":"ContainerStarted","Data":"0405ca3062806c7e1e799311c5ac6630b116cf60c9877cf71dea5a7c2a963084"} Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.849500 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" event={"ID":"91863ede-5184-40d2-8fba-1f65d6fdc785","Type":"ContainerStarted","Data":"de0a34f9af6fb1363b01e39d80935961f1b0d06a629c554a2510d72a174cf948"} Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.850265 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.851728 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" event={"ID":"c2ca8e92-ef3f-442a-830f-0e3c37d76087","Type":"ContainerStarted","Data":"aea9915ceaf5281d79bf1513c8113c0b5e034a909f0658df3b2b5f50721bc21d"} Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.851914 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.883634 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" podStartSLOduration=2.626444964 podStartE2EDuration="6.883615391s" podCreationTimestamp="2026-02-01 07:32:25 +0000 UTC" firstStartedPulling="2026-02-01 07:32:26.402546891 +0000 UTC m=+619.522983325" lastFinishedPulling="2026-02-01 07:32:30.659717318 +0000 UTC m=+623.780153752" observedRunningTime="2026-02-01 07:32:31.879577152 +0000 UTC m=+625.000013606" watchObservedRunningTime="2026-02-01 07:32:31.883615391 +0000 UTC m=+625.004051835" Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.913065 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" podStartSLOduration=1.912356607 podStartE2EDuration="5.913036121s" podCreationTimestamp="2026-02-01 07:32:26 +0000 UTC" firstStartedPulling="2026-02-01 07:32:26.7303108 +0000 UTC m=+619.850747234" lastFinishedPulling="2026-02-01 07:32:30.730990314 +0000 UTC m=+623.851426748" observedRunningTime="2026-02-01 07:32:31.907153361 +0000 UTC m=+625.027589795" watchObservedRunningTime="2026-02-01 07:32:31.913036121 +0000 UTC m=+625.033472595" Feb 01 07:32:46 crc kubenswrapper[4835]: I0201 07:32:46.486202 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:33:06 crc kubenswrapper[4835]: I0201 07:33:06.128291 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.018398 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.019501 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.021763 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.021825 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8q2cn" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.023602 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9qwwp"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.026172 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.031810 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.037743 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.041512 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.145703 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8s85p"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.146476 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.148743 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.148788 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.149136 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-slwrd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.149360 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.167904 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-metrics\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.167984 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rd96\" (UniqueName: \"kubernetes.io/projected/5c427241-76d6-4772-9a78-74952bdbf29f-kube-api-access-7rd96\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168007 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5c427241-76d6-4772-9a78-74952bdbf29f-frr-startup\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168028 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-reloader\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168045 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-sockets\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168065 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx8kd\" (UniqueName: \"kubernetes.io/projected/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-kube-api-access-fx8kd\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168139 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168161 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c427241-76d6-4772-9a78-74952bdbf29f-metrics-certs\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168211 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-conf\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.176229 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-6qvjg"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.177183 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.178795 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.204032 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6qvjg"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269237 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-metrics\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269290 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rd96\" (UniqueName: \"kubernetes.io/projected/5c427241-76d6-4772-9a78-74952bdbf29f-kube-api-access-7rd96\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269315 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5c427241-76d6-4772-9a78-74952bdbf29f-frr-startup\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269351 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgcsp\" (UniqueName: \"kubernetes.io/projected/0975cec6-f6ff-4188-9435-864a46ad1740-kube-api-access-dgcsp\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269380 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-reloader\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269403 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-cert\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269450 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-sockets\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269479 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx8kd\" (UniqueName: \"kubernetes.io/projected/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-kube-api-access-fx8kd\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269506 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-metrics-certs\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269531 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269566 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c427241-76d6-4772-9a78-74952bdbf29f-metrics-certs\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269592 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269628 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-conf\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269656 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzk8\" (UniqueName: \"kubernetes.io/projected/86105024-7ff9-4d38-9333-c7c7b241a5c5-kube-api-access-pxzk8\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269669 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-metrics\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269679 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0975cec6-f6ff-4188-9435-864a46ad1740-metallb-excludel2\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269730 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-metrics-certs\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269843 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-sockets\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.270622 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5c427241-76d6-4772-9a78-74952bdbf29f-frr-startup\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.270719 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-reloader\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.270905 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-conf\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.276994 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.284221 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c427241-76d6-4772-9a78-74952bdbf29f-metrics-certs\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.288229 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rd96\" (UniqueName: \"kubernetes.io/projected/5c427241-76d6-4772-9a78-74952bdbf29f-kube-api-access-7rd96\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.292942 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx8kd\" (UniqueName: \"kubernetes.io/projected/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-kube-api-access-fx8kd\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.334851 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.342020 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371389 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371484 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzk8\" (UniqueName: \"kubernetes.io/projected/86105024-7ff9-4d38-9333-c7c7b241a5c5-kube-api-access-pxzk8\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371507 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0975cec6-f6ff-4188-9435-864a46ad1740-metallb-excludel2\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371527 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-metrics-certs\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371577 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgcsp\" (UniqueName: \"kubernetes.io/projected/0975cec6-f6ff-4188-9435-864a46ad1740-kube-api-access-dgcsp\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-cert\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371634 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-metrics-certs\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: E0201 07:33:07.373165 4835 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 01 07:33:07 crc kubenswrapper[4835]: E0201 07:33:07.373272 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist podName:0975cec6-f6ff-4188-9435-864a46ad1740 nodeName:}" failed. No retries permitted until 2026-02-01 07:33:07.873239695 +0000 UTC m=+660.993676299 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist") pod "speaker-8s85p" (UID: "0975cec6-f6ff-4188-9435-864a46ad1740") : secret "metallb-memberlist" not found Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.373525 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0975cec6-f6ff-4188-9435-864a46ad1740-metallb-excludel2\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.377630 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-metrics-certs\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.378516 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-metrics-certs\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.378635 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-cert\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.399994 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgcsp\" (UniqueName: \"kubernetes.io/projected/0975cec6-f6ff-4188-9435-864a46ad1740-kube-api-access-dgcsp\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.404819 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzk8\" (UniqueName: \"kubernetes.io/projected/86105024-7ff9-4d38-9333-c7c7b241a5c5-kube-api-access-pxzk8\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.487900 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6qvjg"
Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.666436 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6qvjg"]
Feb 01 07:33:07 crc kubenswrapper[4835]: W0201 07:33:07.669823 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86105024_7ff9_4d38_9333_c7c7b241a5c5.slice/crio-297023993fd8efda54f32b5742c11a160069b98d2c71e82c938e7f26e3c8154b WatchSource:0}: Error finding container 297023993fd8efda54f32b5742c11a160069b98d2c71e82c938e7f26e3c8154b: Status 404 returned error can't find the container with id 297023993fd8efda54f32b5742c11a160069b98d2c71e82c938e7f26e3c8154b
Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.763783 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd"]
Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.880675 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p"
Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.886427 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p"
Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.059457 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-slwrd"
Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.068039 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8s85p"
Feb 01 07:33:08 crc kubenswrapper[4835]: W0201 07:33:08.090901 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0975cec6_f6ff_4188_9435_864a46ad1740.slice/crio-4a7beeec794c6c028445dfe36a16c3e43c4745798f4a72bdf0b49d0af3ca7cb0 WatchSource:0}: Error finding container 4a7beeec794c6c028445dfe36a16c3e43c4745798f4a72bdf0b49d0af3ca7cb0: Status 404 returned error can't find the container with id 4a7beeec794c6c028445dfe36a16c3e43c4745798f4a72bdf0b49d0af3ca7cb0
Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.094631 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" event={"ID":"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9","Type":"ContainerStarted","Data":"9d9385f2e885cffa4fe19e9729f9bc25f27390211bcfdfecc7fd90c06f3b8303"}
Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.097903 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"f6f645f3d29c3449925d625d4603c7ea0521e86dedf703651a60b4af08826d92"}
Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.099523 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6qvjg" event={"ID":"86105024-7ff9-4d38-9333-c7c7b241a5c5","Type":"ContainerStarted","Data":"b53313c7fa50e96458d12a9e819206063efa1fe75cbc78cbc89f17303f5db3e3"}
Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.099576 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6qvjg" event={"ID":"86105024-7ff9-4d38-9333-c7c7b241a5c5","Type":"ContainerStarted","Data":"297023993fd8efda54f32b5742c11a160069b98d2c71e82c938e7f26e3c8154b"}
Feb 01 07:33:09 crc kubenswrapper[4835]: I0201 07:33:09.109185 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8s85p" event={"ID":"0975cec6-f6ff-4188-9435-864a46ad1740","Type":"ContainerStarted","Data":"b57eb59308467a79265de6ed788edb65d0142c6ead6841ade0aebb1e017c44f2"}
Feb 01 07:33:09 crc kubenswrapper[4835]: I0201 07:33:09.109450 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8s85p" event={"ID":"0975cec6-f6ff-4188-9435-864a46ad1740","Type":"ContainerStarted","Data":"4a7beeec794c6c028445dfe36a16c3e43c4745798f4a72bdf0b49d0af3ca7cb0"}
Feb 01 07:33:11 crc kubenswrapper[4835]: I0201 07:33:11.127244 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6qvjg" event={"ID":"86105024-7ff9-4d38-9333-c7c7b241a5c5","Type":"ContainerStarted","Data":"8148c97c2d1729a4cfd2254f790d50ec52fd9652729b74e77be590c2d57dd1f3"}
Feb 01 07:33:11 crc kubenswrapper[4835]: I0201 07:33:11.127584 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-6qvjg"
Feb 01 07:33:11 crc kubenswrapper[4835]: I0201 07:33:11.143914 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-6qvjg" podStartSLOduration=0.968947836 podStartE2EDuration="4.143901369s" podCreationTimestamp="2026-02-01 07:33:07 +0000 UTC" firstStartedPulling="2026-02-01 07:33:07.788020905 +0000 UTC m=+660.908457339" lastFinishedPulling="2026-02-01 07:33:10.962974448 +0000 UTC m=+664.083410872" observedRunningTime="2026-02-01 07:33:11.142922663 +0000 UTC m=+664.263359097" watchObservedRunningTime="2026-02-01 07:33:11.143901369 +0000 UTC m=+664.264337803"
Feb 01 07:33:12 crc kubenswrapper[4835]: I0201 07:33:12.135541 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8s85p" event={"ID":"0975cec6-f6ff-4188-9435-864a46ad1740","Type":"ContainerStarted","Data":"79e951890140423fbd8b935f2b9b4f36f5756e0fb078824c64f854a64728379a"}
Feb 01 07:33:12 crc kubenswrapper[4835]: I0201 07:33:12.157245 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8s85p" podStartSLOduration=2.587147172 podStartE2EDuration="5.157221146s" podCreationTimestamp="2026-02-01 07:33:07 +0000 UTC" firstStartedPulling="2026-02-01 07:33:08.407783328 +0000 UTC m=+661.528219762" lastFinishedPulling="2026-02-01 07:33:10.977857302 +0000 UTC m=+664.098293736" observedRunningTime="2026-02-01 07:33:12.154858992 +0000 UTC m=+665.275295446" watchObservedRunningTime="2026-02-01 07:33:12.157221146 +0000 UTC m=+665.277657620"
Feb 01 07:33:13 crc kubenswrapper[4835]: I0201 07:33:13.145063 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8s85p"
Feb 01 07:33:15 crc kubenswrapper[4835]: I0201 07:33:15.161388 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" event={"ID":"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9","Type":"ContainerStarted","Data":"cebedd70c405f420fec8624fd4c4e8d3a8b0db318d764ee54418869a78b7f5e4"}
Feb 01 07:33:15 crc kubenswrapper[4835]: I0201 07:33:15.161784 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd"
Feb 01 07:33:15 crc kubenswrapper[4835]: I0201 07:33:15.164851 4835 generic.go:334] "Generic (PLEG): container finished" podID="5c427241-76d6-4772-9a78-74952bdbf29f" containerID="a19737494a47edebf6868e3c306147e7c20c29f6a80c67a14d325fc4f60be064" exitCode=0
Feb 01 07:33:15 crc kubenswrapper[4835]: I0201 07:33:15.164885 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerDied","Data":"a19737494a47edebf6868e3c306147e7c20c29f6a80c67a14d325fc4f60be064"}
Feb 01 07:33:15 crc kubenswrapper[4835]: I0201 07:33:15.180939 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" podStartSLOduration=2.039366727 podStartE2EDuration="9.180902764s" podCreationTimestamp="2026-02-01 07:33:06 +0000 UTC" firstStartedPulling="2026-02-01 07:33:07.772099622 +0000 UTC m=+660.892536066" lastFinishedPulling="2026-02-01 07:33:14.913635669 +0000 UTC m=+668.034072103" observedRunningTime="2026-02-01 07:33:15.174618623 +0000 UTC m=+668.295055087" watchObservedRunningTime="2026-02-01 07:33:15.180902764 +0000 UTC m=+668.301339198"
Feb 01 07:33:16 crc kubenswrapper[4835]: I0201 07:33:16.175104 4835 generic.go:334] "Generic (PLEG): container finished" podID="5c427241-76d6-4772-9a78-74952bdbf29f" containerID="b6d97e1daf21a6d088e16a3e02a68d02b41da503e6a62e7f75630fb90021aed6" exitCode=0
Feb 01 07:33:16 crc kubenswrapper[4835]: I0201 07:33:16.175179 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerDied","Data":"b6d97e1daf21a6d088e16a3e02a68d02b41da503e6a62e7f75630fb90021aed6"}
Feb 01 07:33:17 crc kubenswrapper[4835]: I0201 07:33:17.187606 4835 generic.go:334] "Generic (PLEG): container finished" podID="5c427241-76d6-4772-9a78-74952bdbf29f" containerID="d547d804da139e781b2c16d8b6d467b3b9a1ef30ed3fe075c1449461552996f7" exitCode=0
Feb 01 07:33:17 crc kubenswrapper[4835]: I0201 07:33:17.188973 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerDied","Data":"d547d804da139e781b2c16d8b6d467b3b9a1ef30ed3fe075c1449461552996f7"}
Feb 01 07:33:17 crc kubenswrapper[4835]: I0201 07:33:17.493032 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-6qvjg"
Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.075226 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8s85p"
Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.206658 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"a2438333ce775d050aad366fecaf228466e901d4e42424978fab513975eadf3e"}
Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.206723 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"daa16a6f56699489a3a1b1ebf3bd69cc828b106e8ee3107596d2d703c3092557"}
Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.206742 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"66c3b1765c8a2e71ac7865f7dc096a6cad463f913d5e8a1200cb9427588e60f0"}
Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.206761 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"fe2cb13a282b5bb779c06aac825d62a4c75423592a3808b0bd9214dfd07f7a25"}
Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.206779 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"43d75cd2de4c4faec6c988b23db3ccd12b8a8a40e5647b179769b83777565381"}
Feb 01 07:33:19 crc kubenswrapper[4835]: I0201 07:33:19.223481 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"687956705a3f872719e9b399d17c14bb3f91db40d7ee0a52ad4e94a4fdb8f033"}
Feb 01 07:33:19 crc kubenswrapper[4835]: I0201 07:33:19.223809 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9qwwp"
Feb 01 07:33:19 crc kubenswrapper[4835]: I0201 07:33:19.266354 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9qwwp" podStartSLOduration=5.909598306 podStartE2EDuration="13.266321253s" podCreationTimestamp="2026-02-01 07:33:06 +0000 UTC" firstStartedPulling="2026-02-01 07:33:07.52075569 +0000 UTC m=+660.641192144" lastFinishedPulling="2026-02-01 07:33:14.877478647 +0000 UTC m=+667.997915091" observedRunningTime="2026-02-01 07:33:19.258290945 +0000 UTC m=+672.378727469" watchObservedRunningTime="2026-02-01 07:33:19.266321253 +0000 UTC m=+672.386757757"
Feb 01 07:33:22 crc kubenswrapper[4835]: I0201 07:33:22.343105 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9qwwp"
Feb 01 07:33:22 crc kubenswrapper[4835]: I0201 07:33:22.375267 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9qwwp"
Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.677471 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"]
Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.678634 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-vzd94"
Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.684048 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.685312 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-wcczr"
Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.685310 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.690035 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"]
Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.775987 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfndf\" (UniqueName: \"kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf\") pod \"mariadb-operator-index-vzd94\" (UID: \"61daec47-a8bc-4ead-9f76-fcf5fca43147\") " pod="openstack-operators/mariadb-operator-index-vzd94"
Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.877508 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfndf\" (UniqueName: \"kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf\") pod \"mariadb-operator-index-vzd94\" (UID: \"61daec47-a8bc-4ead-9f76-fcf5fca43147\") " pod="openstack-operators/mariadb-operator-index-vzd94"
Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.899404 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfndf\" (UniqueName: \"kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf\") pod \"mariadb-operator-index-vzd94\" (UID: \"61daec47-a8bc-4ead-9f76-fcf5fca43147\") " pod="openstack-operators/mariadb-operator-index-vzd94"
Feb 01 07:33:24 crc kubenswrapper[4835]: I0201 07:33:24.010142 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-vzd94"
Feb 01 07:33:24 crc kubenswrapper[4835]: I0201 07:33:24.518110 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"]
Feb 01 07:33:24 crc kubenswrapper[4835]: W0201 07:33:24.527744 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61daec47_a8bc_4ead_9f76_fcf5fca43147.slice/crio-e228621a4370f2d9433fb96f0949b6e19537d374c81dc6d6a0e3bc6eb82fd894 WatchSource:0}: Error finding container e228621a4370f2d9433fb96f0949b6e19537d374c81dc6d6a0e3bc6eb82fd894: Status 404 returned error can't find the container with id e228621a4370f2d9433fb96f0949b6e19537d374c81dc6d6a0e3bc6eb82fd894
Feb 01 07:33:25 crc kubenswrapper[4835]: I0201 07:33:25.274776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-vzd94" event={"ID":"61daec47-a8bc-4ead-9f76-fcf5fca43147","Type":"ContainerStarted","Data":"e228621a4370f2d9433fb96f0949b6e19537d374c81dc6d6a0e3bc6eb82fd894"}
Feb 01 07:33:26 crc kubenswrapper[4835]: I0201 07:33:26.288556 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-vzd94" event={"ID":"61daec47-a8bc-4ead-9f76-fcf5fca43147","Type":"ContainerStarted","Data":"7fe581ac06c8d9bbe9a6da9878e81a9db49b53afc871b3268c9a15241b9c7d55"}
Feb 01 07:33:26 crc kubenswrapper[4835]: I0201 07:33:26.317863 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-vzd94" podStartSLOduration=2.319151936 podStartE2EDuration="3.317834226s" podCreationTimestamp="2026-02-01 07:33:23 +0000 UTC" firstStartedPulling="2026-02-01 07:33:24.5313109 +0000 UTC m=+677.651747354" lastFinishedPulling="2026-02-01 07:33:25.52999321 +0000 UTC m=+678.650429644" observedRunningTime="2026-02-01 07:33:26.31282196 +0000 UTC m=+679.433258434" watchObservedRunningTime="2026-02-01 07:33:26.317834226 +0000 UTC m=+679.438270700"
Feb 01 07:33:26 crc kubenswrapper[4835]: I0201 07:33:26.850396 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"]
Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.344877 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd"
Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.347385 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9qwwp"
Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.467600 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-hgssn"]
Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.468372 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-hgssn"
Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.476170 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-hgssn"]
Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.541762 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tqjr\" (UniqueName: \"kubernetes.io/projected/bc494048-8b2c-4d2e-925e-8b1b779dab89-kube-api-access-8tqjr\") pod \"mariadb-operator-index-hgssn\" (UID: \"bc494048-8b2c-4d2e-925e-8b1b779dab89\") " pod="openstack-operators/mariadb-operator-index-hgssn"
Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.644159 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tqjr\" (UniqueName: \"kubernetes.io/projected/bc494048-8b2c-4d2e-925e-8b1b779dab89-kube-api-access-8tqjr\") pod \"mariadb-operator-index-hgssn\" (UID: \"bc494048-8b2c-4d2e-925e-8b1b779dab89\") " pod="openstack-operators/mariadb-operator-index-hgssn"
Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.669457 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tqjr\" (UniqueName: \"kubernetes.io/projected/bc494048-8b2c-4d2e-925e-8b1b779dab89-kube-api-access-8tqjr\") pod \"mariadb-operator-index-hgssn\" (UID: \"bc494048-8b2c-4d2e-925e-8b1b779dab89\") " pod="openstack-operators/mariadb-operator-index-hgssn"
Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.781469 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-hgssn"
Feb 01 07:33:28 crc kubenswrapper[4835]: I0201 07:33:28.195395 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-hgssn"]
Feb 01 07:33:28 crc kubenswrapper[4835]: I0201 07:33:28.302863 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hgssn" event={"ID":"bc494048-8b2c-4d2e-925e-8b1b779dab89","Type":"ContainerStarted","Data":"095297a208669352587746dd3adcf3244e16253140f3b49a138f639f2322d82a"}
Feb 01 07:33:28 crc kubenswrapper[4835]: I0201 07:33:28.303069 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-vzd94" podUID="61daec47-a8bc-4ead-9f76-fcf5fca43147" containerName="registry-server" containerID="cri-o://7fe581ac06c8d9bbe9a6da9878e81a9db49b53afc871b3268c9a15241b9c7d55" gracePeriod=2
Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.344200 4835 generic.go:334] "Generic (PLEG): container finished" podID="61daec47-a8bc-4ead-9f76-fcf5fca43147" containerID="7fe581ac06c8d9bbe9a6da9878e81a9db49b53afc871b3268c9a15241b9c7d55" exitCode=0
Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.344266 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-vzd94" event={"ID":"61daec47-a8bc-4ead-9f76-fcf5fca43147","Type":"ContainerDied","Data":"7fe581ac06c8d9bbe9a6da9878e81a9db49b53afc871b3268c9a15241b9c7d55"}
Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.549606 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-vzd94"
Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.570404 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfndf\" (UniqueName: \"kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf\") pod \"61daec47-a8bc-4ead-9f76-fcf5fca43147\" (UID: \"61daec47-a8bc-4ead-9f76-fcf5fca43147\") "
Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.588643 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf" (OuterVolumeSpecName: "kube-api-access-rfndf") pod "61daec47-a8bc-4ead-9f76-fcf5fca43147" (UID: "61daec47-a8bc-4ead-9f76-fcf5fca43147"). InnerVolumeSpecName "kube-api-access-rfndf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.672808 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfndf\" (UniqueName: \"kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf\") on node \"crc\" DevicePath \"\""
Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.350346 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-vzd94" event={"ID":"61daec47-a8bc-4ead-9f76-fcf5fca43147","Type":"ContainerDied","Data":"e228621a4370f2d9433fb96f0949b6e19537d374c81dc6d6a0e3bc6eb82fd894"}
Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.350396 4835 scope.go:117] "RemoveContainer" containerID="7fe581ac06c8d9bbe9a6da9878e81a9db49b53afc871b3268c9a15241b9c7d55"
Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.350500 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-vzd94"
Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.356190 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hgssn" event={"ID":"bc494048-8b2c-4d2e-925e-8b1b779dab89","Type":"ContainerStarted","Data":"00535660be0470f57ec6a455366d6aae9b9f2d8d7e55f5991b7f07020dd58c09"}
Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.383253 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-hgssn" podStartSLOduration=2.363307875 podStartE2EDuration="3.383232941s" podCreationTimestamp="2026-02-01 07:33:27 +0000 UTC" firstStartedPulling="2026-02-01 07:33:28.2141109 +0000 UTC m=+681.334547334" lastFinishedPulling="2026-02-01 07:33:29.234035956 +0000 UTC m=+682.354472400" observedRunningTime="2026-02-01 07:33:30.380712913 +0000 UTC m=+683.501149387" watchObservedRunningTime="2026-02-01 07:33:30.383232941 +0000 UTC m=+683.503669375"
Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.397952 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"]
Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.408549 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"]
Feb 01 07:33:31 crc kubenswrapper[4835]: I0201 07:33:31.575526 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61daec47-a8bc-4ead-9f76-fcf5fca43147" path="/var/lib/kubelet/pods/61daec47-a8bc-4ead-9f76-fcf5fca43147/volumes"
Feb 01 07:33:37 crc kubenswrapper[4835]: I0201 07:33:37.781722 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-hgssn"
Feb 01 07:33:37 crc kubenswrapper[4835]: I0201 07:33:37.782455 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-hgssn"
Feb 01 07:33:37 crc kubenswrapper[4835]: I0201 07:33:37.839792 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-hgssn"
Feb 01 07:33:38 crc kubenswrapper[4835]: I0201 07:33:38.466499 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-hgssn"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.819088 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"]
Feb 01 07:33:42 crc kubenswrapper[4835]: E0201 07:33:42.819797 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61daec47-a8bc-4ead-9f76-fcf5fca43147" containerName="registry-server"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.819820 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="61daec47-a8bc-4ead-9f76-fcf5fca43147" containerName="registry-server"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.820060 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="61daec47-a8bc-4ead-9f76-fcf5fca43147" containerName="registry-server"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.821386 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.823887 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.840563 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"]
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.854275 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.854336 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.854677 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74gg\" (UniqueName: \"kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.955836 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z74gg\" (UniqueName: \"kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.955937 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.956011 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.956608 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.956727 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.991841 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z74gg\" (UniqueName: \"kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:43 crc kubenswrapper[4835]: I0201 07:33:43.151848 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:43 crc kubenswrapper[4835]: I0201 07:33:43.701728 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"]
Feb 01 07:33:43 crc kubenswrapper[4835]: W0201 07:33:43.723921 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod147369ac_5553_4aa7_944b_878065951228.slice/crio-d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec WatchSource:0}: Error finding container d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec: Status 404 returned error can't find the container with id d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec
Feb 01 07:33:44 crc kubenswrapper[4835]: I0201 07:33:44.473271 4835 generic.go:334] "Generic (PLEG): container finished" podID="147369ac-5553-4aa7-944b-878065951228" containerID="4c04c0aadb0582b3c423a84a41aff698e1c915ae6ed84c5785dce5be5bc1aae5" exitCode=0
Feb 01 07:33:44 crc kubenswrapper[4835]: I0201 07:33:44.473395 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerDied","Data":"4c04c0aadb0582b3c423a84a41aff698e1c915ae6ed84c5785dce5be5bc1aae5"}
Feb 01 07:33:44 crc kubenswrapper[4835]: I0201 07:33:44.473722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerStarted","Data":"d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec"}
Feb 01 07:33:45 crc kubenswrapper[4835]: I0201 07:33:45.484602 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerStarted","Data":"a5b34c3269d9077c1868416ef9788156afaffaf20ad86ae96b120830561501d3"}
Feb 01 07:33:46 crc kubenswrapper[4835]: I0201 07:33:46.494954 4835 generic.go:334] "Generic (PLEG): container finished" podID="147369ac-5553-4aa7-944b-878065951228" containerID="a5b34c3269d9077c1868416ef9788156afaffaf20ad86ae96b120830561501d3" exitCode=0
Feb 01 07:33:46 crc kubenswrapper[4835]: I0201 07:33:46.495038 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerDied","Data":"a5b34c3269d9077c1868416ef9788156afaffaf20ad86ae96b120830561501d3"}
Feb 01 07:33:47 crc kubenswrapper[4835]: I0201 07:33:47.506049 4835 generic.go:334] "Generic (PLEG): container finished" podID="147369ac-5553-4aa7-944b-878065951228" containerID="f91f806086054776a6bb00c5c22c3d8c35dd533d1f5bd6037500d74cf0533b06" exitCode=0
Feb 01 07:33:47 crc kubenswrapper[4835]: I0201 07:33:47.506229 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerDied","Data":"f91f806086054776a6bb00c5c22c3d8c35dd533d1f5bd6037500d74cf0533b06"}
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.781085 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.942895 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util\") pod \"147369ac-5553-4aa7-944b-878065951228\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") "
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.943021 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle\") pod \"147369ac-5553-4aa7-944b-878065951228\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") "
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.943097 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z74gg\" (UniqueName: \"kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg\") pod \"147369ac-5553-4aa7-944b-878065951228\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") "
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.944572 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle" (OuterVolumeSpecName: "bundle") pod "147369ac-5553-4aa7-944b-878065951228" (UID: "147369ac-5553-4aa7-944b-878065951228"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.951226 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg" (OuterVolumeSpecName: "kube-api-access-z74gg") pod "147369ac-5553-4aa7-944b-878065951228" (UID: "147369ac-5553-4aa7-944b-878065951228"). InnerVolumeSpecName "kube-api-access-z74gg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.961139 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util" (OuterVolumeSpecName: "util") pod "147369ac-5553-4aa7-944b-878065951228" (UID: "147369ac-5553-4aa7-944b-878065951228"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.044995 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z74gg\" (UniqueName: \"kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg\") on node \"crc\" DevicePath \"\""
Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.045097 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util\") on node \"crc\" DevicePath \"\""
Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.045131 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle\") on node \"crc\" DevicePath \"\""
Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.523768 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerDied","Data":"d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec"}
Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.523857 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec"
Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.523864 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.049119 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"]
Feb 01 07:33:57 crc kubenswrapper[4835]: E0201 07:33:57.050481 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="util"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.050507 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="util"
Feb 01 07:33:57 crc kubenswrapper[4835]: E0201 07:33:57.050611 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="pull"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.050622 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="pull"
Feb 01 07:33:57 crc kubenswrapper[4835]: E0201 07:33:57.050649 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="extract"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.050657 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="extract"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.051146 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="extract"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.052199 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.065054 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.066070 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4pq2v"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.066172 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.081005 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"]
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.166596 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-apiservice-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.166660 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-webhook-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.166751 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v44b\" (UniqueName: \"kubernetes.io/projected/73820432-e4ca-45a7-ae9c-77a538ce1d20-kube-api-access-4v44b\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.267916 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-apiservice-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.268023 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-webhook-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.268263 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v44b\" (UniqueName: \"kubernetes.io/projected/73820432-e4ca-45a7-ae9c-77a538ce1d20-kube-api-access-4v44b\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.274732 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-apiservice-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.274827 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-webhook-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.289940 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v44b\" (UniqueName: \"kubernetes.io/projected/73820432-e4ca-45a7-ae9c-77a538ce1d20-kube-api-access-4v44b\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.390768 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.643117 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"]
Feb 01 07:33:58 crc kubenswrapper[4835]: I0201 07:33:58.585225 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" event={"ID":"73820432-e4ca-45a7-ae9c-77a538ce1d20","Type":"ContainerStarted","Data":"fe87356c43c61ae626b92e1fc497af9eb35c84218fac5ca5f3154727b19e8a50"}
Feb 01 07:34:01 crc kubenswrapper[4835]: I0201 07:34:01.604829 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" event={"ID":"73820432-e4ca-45a7-ae9c-77a538ce1d20","Type":"ContainerStarted","Data":"dc732c1444db053ba7c64a76b53a633a3e4530b6828a731db27d025313abf3db"}
Feb 01 07:34:01 crc kubenswrapper[4835]: I0201 07:34:01.605283 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:34:01 crc kubenswrapper[4835]: I0201 07:34:01.627764 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" podStartSLOduration=1.173798282 podStartE2EDuration="4.6277403s" podCreationTimestamp="2026-02-01 07:33:57 +0000 UTC" firstStartedPulling="2026-02-01 07:33:57.661480947 +0000 UTC m=+710.781917381" lastFinishedPulling="2026-02-01 07:34:01.115422965 +0000 UTC m=+714.235859399" observedRunningTime="2026-02-01 07:34:01.624992818 +0000 UTC m=+714.745429282" watchObservedRunningTime="2026-02-01 07:34:01.6277403 +0000 UTC m=+714.748176754"
Feb 01 07:34:07 crc kubenswrapper[4835]: I0201 07:34:07.398769 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"
Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.698944 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-x9r54"]
Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.702094 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-x9r54"
Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.705694 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-gpxdj"
Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.720459 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-x9r54"]
Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.734983 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lftl\" (UniqueName: \"kubernetes.io/projected/c754e3d7-d607-4427-b349-b5c22df261ec-kube-api-access-2lftl\") pod \"infra-operator-index-x9r54\" (UID: \"c754e3d7-d607-4427-b349-b5c22df261ec\") " pod="openstack-operators/infra-operator-index-x9r54"
Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.836233 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lftl\" (UniqueName: \"kubernetes.io/projected/c754e3d7-d607-4427-b349-b5c22df261ec-kube-api-access-2lftl\") pod \"infra-operator-index-x9r54\" (UID: \"c754e3d7-d607-4427-b349-b5c22df261ec\") " pod="openstack-operators/infra-operator-index-x9r54"
Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.862018 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lftl\" (UniqueName: \"kubernetes.io/projected/c754e3d7-d607-4427-b349-b5c22df261ec-kube-api-access-2lftl\") pod \"infra-operator-index-x9r54\" (UID: \"c754e3d7-d607-4427-b349-b5c22df261ec\") " pod="openstack-operators/infra-operator-index-x9r54"
Feb 01 07:34:14 crc kubenswrapper[4835]: I0201 07:34:14.034348 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-x9r54"
Feb 01 07:34:14 crc kubenswrapper[4835]: I0201 07:34:14.546312 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-x9r54"]
Feb 01 07:34:14 crc kubenswrapper[4835]: W0201 07:34:14.555236 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc754e3d7_d607_4427_b349_b5c22df261ec.slice/crio-661626f5d57ce8829d807379c8f6446cd04b089fc237bd653180432fdd96099d WatchSource:0}: Error finding container 661626f5d57ce8829d807379c8f6446cd04b089fc237bd653180432fdd96099d: Status 404 returned error can't find the container with id 661626f5d57ce8829d807379c8f6446cd04b089fc237bd653180432fdd96099d
Feb 01 07:34:14 crc kubenswrapper[4835]: I0201 07:34:14.698693 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-x9r54" event={"ID":"c754e3d7-d607-4427-b349-b5c22df261ec","Type":"ContainerStarted","Data":"661626f5d57ce8829d807379c8f6446cd04b089fc237bd653180432fdd96099d"}
Feb 01 07:34:16 crc kubenswrapper[4835]: I0201 07:34:16.711626 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-x9r54" event={"ID":"c754e3d7-d607-4427-b349-b5c22df261ec","Type":"ContainerStarted","Data":"d6fe90ef260d00d9323d7bac74882c87a064150e0506e70d2167ad57d285ccd5"}
Feb 01 07:34:16 crc kubenswrapper[4835]: I0201 07:34:16.731578 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-x9r54" podStartSLOduration=1.985478617 podStartE2EDuration="3.73156103s" podCreationTimestamp="2026-02-01 07:34:13 +0000 UTC" firstStartedPulling="2026-02-01 07:34:14.557981672 +0000 UTC m=+727.678418146" lastFinishedPulling="2026-02-01 07:34:16.304064085 +0000 UTC m=+729.424500559" observedRunningTime="2026-02-01 07:34:16.731128349 +0000 UTC m=+729.851564803" watchObservedRunningTime="2026-02-01 07:34:16.73156103 +0000 UTC m=+729.851997464"
Feb 01 07:34:24 crc kubenswrapper[4835]: I0201 07:34:24.034666 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-x9r54"
Feb 01 07:34:24 crc kubenswrapper[4835]: I0201 07:34:24.035013 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-x9r54"
Feb 01 07:34:24 crc kubenswrapper[4835]: I0201 07:34:24.070350 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-x9r54"
Feb 01 07:34:24 crc kubenswrapper[4835]: I0201 07:34:24.812269 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-x9r54"
Feb 01 07:34:25 crc kubenswrapper[4835]: I0201 07:34:25.192395 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:34:25 crc kubenswrapper[4835]: I0201 07:34:25.192563 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:34:26 crc kubenswrapper[4835]: I0201 07:34:26.959284 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"]
Feb 01 07:34:26 crc kubenswrapper[4835]: I0201 07:34:26.960613 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:26 crc kubenswrapper[4835]: I0201 07:34:26.963371 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm"
Feb 01 07:34:26 crc kubenswrapper[4835]: I0201 07:34:26.979344 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"]
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.130196 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.130251 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.130304 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbw7m\" (UniqueName: \"kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.231657 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.231772 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.231880 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbw7m\" (UniqueName: \"kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.232569 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.232720 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.262918 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbw7m\" (UniqueName: \"kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.331348 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.637536 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"]
Feb 01 07:34:27 crc kubenswrapper[4835]: W0201 07:34:27.647838 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4326f882_2be0_41a9_b71d_14e811ba9343.slice/crio-921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a WatchSource:0}: Error finding container 921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a: Status 404 returned error can't find the container with id 921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a
Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.795900 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" event={"ID":"4326f882-2be0-41a9-b71d-14e811ba9343","Type":"ContainerStarted","Data":"921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a"}
Feb 01 07:34:28 crc kubenswrapper[4835]: I0201 07:34:28.806789 4835 generic.go:334] "Generic (PLEG): container finished" podID="4326f882-2be0-41a9-b71d-14e811ba9343" containerID="9c9b24f7ffd0500deb0b44af392fbcd90e3501df8690512163f2c78ecc5f2750" exitCode=0
Feb 01 07:34:28 crc kubenswrapper[4835]: I0201 07:34:28.806860 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" event={"ID":"4326f882-2be0-41a9-b71d-14e811ba9343","Type":"ContainerDied","Data":"9c9b24f7ffd0500deb0b44af392fbcd90e3501df8690512163f2c78ecc5f2750"}
Feb 01 07:34:29 crc kubenswrapper[4835]: I0201 07:34:29.815370 4835 generic.go:334] "Generic (PLEG): container finished" podID="4326f882-2be0-41a9-b71d-14e811ba9343" containerID="80600d67a82faedb2773f73ef514dfb8a4b47134e6d54fcb4ca036b44387978b" exitCode=0
Feb 01 07:34:29 crc kubenswrapper[4835]: I0201 07:34:29.815496 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" event={"ID":"4326f882-2be0-41a9-b71d-14e811ba9343","Type":"ContainerDied","Data":"80600d67a82faedb2773f73ef514dfb8a4b47134e6d54fcb4ca036b44387978b"}
Feb 01 07:34:30 crc kubenswrapper[4835]: I0201 07:34:30.833800 4835 generic.go:334] "Generic (PLEG): container finished" podID="4326f882-2be0-41a9-b71d-14e811ba9343" containerID="662c3275e759f91f96bafbb45c12983ad019e7e1b8c42648d4fe1b80527cc463" exitCode=0
Feb 01 07:34:30 crc kubenswrapper[4835]: I0201 07:34:30.833847 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" event={"ID":"4326f882-2be0-41a9-b71d-14e811ba9343","Type":"ContainerDied","Data":"662c3275e759f91f96bafbb45c12983ad019e7e1b8c42648d4fe1b80527cc463"}
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.185127 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.304811 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbw7m\" (UniqueName: \"kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m\") pod \"4326f882-2be0-41a9-b71d-14e811ba9343\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") "
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.304944 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util\") pod \"4326f882-2be0-41a9-b71d-14e811ba9343\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") "
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.305216 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle\") pod \"4326f882-2be0-41a9-b71d-14e811ba9343\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") "
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.319135 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle" (OuterVolumeSpecName: "bundle") pod "4326f882-2be0-41a9-b71d-14e811ba9343" (UID: "4326f882-2be0-41a9-b71d-14e811ba9343"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.325093 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m" (OuterVolumeSpecName: "kube-api-access-cbw7m") pod "4326f882-2be0-41a9-b71d-14e811ba9343" (UID: "4326f882-2be0-41a9-b71d-14e811ba9343"). InnerVolumeSpecName "kube-api-access-cbw7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.333680 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util" (OuterVolumeSpecName: "util") pod "4326f882-2be0-41a9-b71d-14e811ba9343" (UID: "4326f882-2be0-41a9-b71d-14e811ba9343"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.406790 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle\") on node \"crc\" DevicePath \"\""
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.406819 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbw7m\" (UniqueName: \"kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m\") on node \"crc\" DevicePath \"\""
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.406830 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util\") on node \"crc\" DevicePath \"\""
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.869722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" event={"ID":"4326f882-2be0-41a9-b71d-14e811ba9343","Type":"ContainerDied","Data":"921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a"}
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.869783 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a"
Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.869802 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"
Feb 01 07:34:38 crc kubenswrapper[4835]: I0201 07:34:38.925765 4835 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.960055 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-0"]
Feb 01 07:34:42 crc kubenswrapper[4835]: E0201 07:34:42.960832 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="pull"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.960855 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="pull"
Feb 01 07:34:42 crc kubenswrapper[4835]: E0201 07:34:42.960884 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="util"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.960897 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="util"
Feb 01 07:34:42 crc kubenswrapper[4835]: E0201 07:34:42.960918 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="extract"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.960930 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="extract"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.961141 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="extract"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.962155 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.965018 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-config-data"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.965291 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-scripts"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.966824 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"galera-openstack-dockercfg-cp2rc"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.970705 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openshift-service-ca.crt"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.974129 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-1"]
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.975859 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1"
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.980798 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-2"]
Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.981935 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.983036 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"kube-root-ca.crt" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.988146 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.028395 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.031700 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.067129 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5pt\" (UniqueName: \"kubernetes.io/projected/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kube-api-access-6x5pt\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.067194 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-default\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.067276 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.069649 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kolla-config\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.069687 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.069763 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170580 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170628 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7t9z\" (UniqueName: \"kubernetes.io/projected/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kube-api-access-k7t9z\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170659 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170689 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kolla-config\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170737 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x5pt\" (UniqueName: \"kubernetes.io/projected/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kube-api-access-6x5pt\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170832 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-operator-scripts\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170907 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kolla-config\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170953 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170990 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-default\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171024 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171106 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171138 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlcc\" (UniqueName: \"kubernetes.io/projected/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kube-api-access-tdlcc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171143 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") device mount path \"/mnt/openstack/pv03\"" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171180 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-default\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171263 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-generated\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171317 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kolla-config\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171352 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171383 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171458 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-default\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171877 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.172402 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kolla-config\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.172467 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-default\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.173697 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.200517 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x5pt\" (UniqueName: \"kubernetes.io/projected/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kube-api-access-6x5pt\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.203135 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.272623 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.272667 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlcc\" (UniqueName: \"kubernetes.io/projected/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kube-api-access-tdlcc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.272698 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-default\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.272715 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-generated\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.272732 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273176 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-generated\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273175 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") device mount path \"/mnt/openstack/pv01\"" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273237 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273368 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-default\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273470 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7t9z\" (UniqueName: \"kubernetes.io/projected/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kube-api-access-k7t9z\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273531 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273554 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-default\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273602 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kolla-config\") pod \"openstack-galera-1\" (UID: 
\"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273671 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-operator-scripts\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273720 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kolla-config\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273766 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.274012 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-default\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.274079 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") device mount path \"/mnt/openstack/pv09\"" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.274203 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kolla-config\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.274685 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kolla-config\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.275158 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.276020 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-operator-scripts\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.291713 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.292962 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7t9z\" (UniqueName: \"kubernetes.io/projected/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kube-api-access-k7t9z\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.296618 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.297321 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlcc\" (UniqueName: \"kubernetes.io/projected/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kube-api-access-tdlcc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.319973 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.332762 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.341853 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.606060 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.607616 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.609852 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n92d9" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.610074 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.630313 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.679136 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-webhook-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.679189 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5r7\" (UniqueName: \"kubernetes.io/projected/aeafdd64-5ab8-429a-9411-bdfe3e0780af-kube-api-access-xm5r7\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.679283 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-apiservice-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.744394 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Feb 01 07:34:43 crc kubenswrapper[4835]: W0201 07:34:43.746592 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf271d73a_6ed8_4c97_b087_c6b3287c11e4.slice/crio-1d3810bd40290d001c50d611127deeb375a9c037efa9f0257f26a45a9804034a WatchSource:0}: Error finding container 1d3810bd40290d001c50d611127deeb375a9c037efa9f0257f26a45a9804034a: Status 404 returned error can't find the container with id 1d3810bd40290d001c50d611127deeb375a9c037efa9f0257f26a45a9804034a Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.780744 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-apiservice-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.780812 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-webhook-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: 
\"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.780845 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm5r7\" (UniqueName: \"kubernetes.io/projected/aeafdd64-5ab8-429a-9411-bdfe3e0780af-kube-api-access-xm5r7\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.787156 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-webhook-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.789620 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-apiservice-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.791848 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Feb 01 07:34:43 crc kubenswrapper[4835]: W0201 07:34:43.798990 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1414aa9_85a0_4ed8_b897_0afc315eacf6.slice/crio-3b0e6951be28475ffa536bb1320fcf2372d6afbdc45911840537b15dc1039aad WatchSource:0}: Error finding container 3b0e6951be28475ffa536bb1320fcf2372d6afbdc45911840537b15dc1039aad: Status 404 returned error can't find the container with id 3b0e6951be28475ffa536bb1320fcf2372d6afbdc45911840537b15dc1039aad Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.799673 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm5r7\" (UniqueName: \"kubernetes.io/projected/aeafdd64-5ab8-429a-9411-bdfe3e0780af-kube-api-access-xm5r7\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.800399 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.923687 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.941740 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"b44d32e5-044c-42e2-a6c8-eb93e48219f2","Type":"ContainerStarted","Data":"7e471ffb79fbd39d2af050977d6f3db82bc4757feb802c7704aeb2c0eca8ced0"} Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.942495 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"d1414aa9-85a0-4ed8-b897-0afc315eacf6","Type":"ContainerStarted","Data":"3b0e6951be28475ffa536bb1320fcf2372d6afbdc45911840537b15dc1039aad"} Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.943080 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f271d73a-6ed8-4c97-b087-c6b3287c11e4","Type":"ContainerStarted","Data":"1d3810bd40290d001c50d611127deeb375a9c037efa9f0257f26a45a9804034a"} Feb 01 07:34:44 crc kubenswrapper[4835]: I0201 07:34:44.099943 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv"] Feb 01 07:34:44 crc kubenswrapper[4835]: W0201 07:34:44.110882 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeafdd64_5ab8_429a_9411_bdfe3e0780af.slice/crio-1672c7f959d52e883053e13c851994f62f2737d3c02c23a393421e694aa21675 WatchSource:0}: Error finding container 1672c7f959d52e883053e13c851994f62f2737d3c02c23a393421e694aa21675: Status 404 returned error can't find the container with id 1672c7f959d52e883053e13c851994f62f2737d3c02c23a393421e694aa21675 Feb 01 07:34:44 crc kubenswrapper[4835]: I0201 07:34:44.951222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" event={"ID":"aeafdd64-5ab8-429a-9411-bdfe3e0780af","Type":"ContainerStarted","Data":"1672c7f959d52e883053e13c851994f62f2737d3c02c23a393421e694aa21675"} Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.000539 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f271d73a-6ed8-4c97-b087-c6b3287c11e4","Type":"ContainerStarted","Data":"3102c90c1c13ed3302574e01fc958fc256bd73d4817ea2a1f116bf8dc4be7f22"} Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.002729 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"b44d32e5-044c-42e2-a6c8-eb93e48219f2","Type":"ContainerStarted","Data":"0025e4ef285b635223e56a201cfd8fde36b2b0eedf19340b5ef5dc6e2e9e082c"} Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.004708 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" event={"ID":"aeafdd64-5ab8-429a-9411-bdfe3e0780af","Type":"ContainerStarted","Data":"92d3f3d392b746f124282379c6f72ca567746e7e14c773d94d3fcb1bccc20102"} Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.004928 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.006588 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" 
event={"ID":"d1414aa9-85a0-4ed8-b897-0afc315eacf6","Type":"ContainerStarted","Data":"62ad44fb76befa2a607f268f2d68073d67fe82504db5ad8b1a0ef4eff4c5da7b"} Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.123215 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" podStartSLOduration=2.517100058 podStartE2EDuration="10.123182724s" podCreationTimestamp="2026-02-01 07:34:43 +0000 UTC" firstStartedPulling="2026-02-01 07:34:44.114500401 +0000 UTC m=+757.234936835" lastFinishedPulling="2026-02-01 07:34:51.720583067 +0000 UTC m=+764.841019501" observedRunningTime="2026-02-01 07:34:53.116651322 +0000 UTC m=+766.237087846" watchObservedRunningTime="2026-02-01 07:34:53.123182724 +0000 UTC m=+766.243619198" Feb 01 07:34:55 crc kubenswrapper[4835]: I0201 07:34:55.192209 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:34:55 crc kubenswrapper[4835]: I0201 07:34:55.192842 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.028761 4835 generic.go:334] "Generic (PLEG): container finished" podID="b44d32e5-044c-42e2-a6c8-eb93e48219f2" containerID="0025e4ef285b635223e56a201cfd8fde36b2b0eedf19340b5ef5dc6e2e9e082c" exitCode=0 Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.028855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"b44d32e5-044c-42e2-a6c8-eb93e48219f2","Type":"ContainerDied","Data":"0025e4ef285b635223e56a201cfd8fde36b2b0eedf19340b5ef5dc6e2e9e082c"} Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.031964 4835 generic.go:334] "Generic (PLEG): container finished" podID="d1414aa9-85a0-4ed8-b897-0afc315eacf6" containerID="62ad44fb76befa2a607f268f2d68073d67fe82504db5ad8b1a0ef4eff4c5da7b" exitCode=0 Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.032020 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"d1414aa9-85a0-4ed8-b897-0afc315eacf6","Type":"ContainerDied","Data":"62ad44fb76befa2a607f268f2d68073d67fe82504db5ad8b1a0ef4eff4c5da7b"} Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.034972 4835 generic.go:334] "Generic (PLEG): container finished" podID="f271d73a-6ed8-4c97-b087-c6b3287c11e4" containerID="3102c90c1c13ed3302574e01fc958fc256bd73d4817ea2a1f116bf8dc4be7f22" exitCode=0 Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.035011 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f271d73a-6ed8-4c97-b087-c6b3287c11e4","Type":"ContainerDied","Data":"3102c90c1c13ed3302574e01fc958fc256bd73d4817ea2a1f116bf8dc4be7f22"} Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.043507 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" 
event={"ID":"b44d32e5-044c-42e2-a6c8-eb93e48219f2","Type":"ContainerStarted","Data":"6520b4b11e397559bd49700232e2ead795f17a06a1246be3adaf7e7ad5bfa961"} Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.045783 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"d1414aa9-85a0-4ed8-b897-0afc315eacf6","Type":"ContainerStarted","Data":"57146b43238d7b8a5f249537accc3d9eaa5ea3c7779ae2ff051551cd15cbe2bf"} Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.047642 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f271d73a-6ed8-4c97-b087-c6b3287c11e4","Type":"ContainerStarted","Data":"39f54e10bdf1a5f7f7b43638c953b8eecae82ae7d71f742fdc764445e1ccc533"} Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.076525 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-1" podStartSLOduration=8.062633277 podStartE2EDuration="16.07650968s" podCreationTimestamp="2026-02-01 07:34:41 +0000 UTC" firstStartedPulling="2026-02-01 07:34:43.809693833 +0000 UTC m=+756.930130267" lastFinishedPulling="2026-02-01 07:34:51.823570226 +0000 UTC m=+764.944006670" observedRunningTime="2026-02-01 07:34:57.072839863 +0000 UTC m=+770.193276287" watchObservedRunningTime="2026-02-01 07:34:57.07650968 +0000 UTC m=+770.196946114" Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.118159 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-2" podStartSLOduration=7.99468188 podStartE2EDuration="16.118140225s" podCreationTimestamp="2026-02-01 07:34:41 +0000 UTC" firstStartedPulling="2026-02-01 07:34:43.748672188 +0000 UTC m=+756.869108622" lastFinishedPulling="2026-02-01 07:34:51.872130533 +0000 UTC m=+764.992566967" observedRunningTime="2026-02-01 07:34:57.109223511 +0000 UTC m=+770.229659945" watchObservedRunningTime="2026-02-01 07:34:57.118140225 +0000 UTC m=+770.238576659" Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.128250 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-0" podStartSLOduration=8.088479647 podStartE2EDuration="16.12822902s" podCreationTimestamp="2026-02-01 07:34:41 +0000 UTC" firstStartedPulling="2026-02-01 07:34:43.80045198 +0000 UTC m=+756.920888414" lastFinishedPulling="2026-02-01 07:34:51.840201343 +0000 UTC m=+764.960637787" observedRunningTime="2026-02-01 07:34:57.125274853 +0000 UTC m=+770.245711307" watchObservedRunningTime="2026-02-01 07:34:57.12822902 +0000 UTC m=+770.248665474" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.320880 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.321177 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.333042 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.333090 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.342960 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:35:03 crc 
kubenswrapper[4835]: I0201 07:35:03.343018 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.435640 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.929761 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:35:04 crc kubenswrapper[4835]: I0201 07:35:04.168629 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.448281 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/memcached-0"] Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.449584 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.451782 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"memcached-config-data" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.452138 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"memcached-memcached-dockercfg-sj2kx" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.464865 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.551324 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kolla-config\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.551404 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-config-data\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.551446 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2jxq\" (UniqueName: \"kubernetes.io/projected/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kube-api-access-r2jxq\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.652610 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kolla-config\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.652673 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-config-data\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.652695 4835 reconciler_common.go:218] 
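The "SyncLoop (probe)" entries above record probe results being fed back into the kubelet sync loop: openstack-galera-2 moves startup=unhealthy, then startup=started, then readiness=ready, while an empty status means the result was cleared. A sketch (stdlib only; path and field names taken from the entries, regex is mine) that keeps the last observed status per pod and probe type:

    import re

    PROBE = re.compile(
        r'"SyncLoop \(probe\)" probe="(?P<probe>\w+)" '
        r'status="(?P<status>\w*)" pod="(?P<pod>[^"]+)"'
    )

    state = {}
    for line in open("kubelet.log"):            # illustrative path
        for m in PROBE.finditer(line):
            state[(m["pod"], m["probe"])] = m["status"] or "(cleared)"

    # From the entries above:
    # state[("swift-kuttl-tests/openstack-galera-2", "startup")]   -> "started"
    # state[("swift-kuttl-tests/openstack-galera-2", "readiness")] -> "ready"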
"operationExecutor.MountVolume started for volume \"kube-api-access-r2jxq\" (UniqueName: \"kubernetes.io/projected/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kube-api-access-r2jxq\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.655786 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"memcached-config-data" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.664739 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-config-data\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.664745 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kolla-config\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.679438 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2jxq\" (UniqueName: \"kubernetes.io/projected/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kube-api-access-r2jxq\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.767337 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"memcached-memcached-dockercfg-sj2kx" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.776496 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:08 crc kubenswrapper[4835]: I0201 07:35:08.998501 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Feb 01 07:35:09 crc kubenswrapper[4835]: I0201 07:35:09.121217 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"37529abc-a5d7-416b-8ea4-c6f0542ab3a8","Type":"ContainerStarted","Data":"4f04820b8f75f969f44cef21bb4b9f31f45d7190d2214ef49b7cbe3ffe8ac3bf"} Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.323045 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp8"] Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.324329 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.327924 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-2dvzh" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.334394 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp8"] Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.389801 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9v5b\" (UniqueName: \"kubernetes.io/projected/be408dba-dcbf-40e4-9b83-cd67424ad82d-kube-api-access-d9v5b\") pod \"rabbitmq-cluster-operator-index-nztp8\" (UID: \"be408dba-dcbf-40e4-9b83-cd67424ad82d\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.491256 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9v5b\" (UniqueName: \"kubernetes.io/projected/be408dba-dcbf-40e4-9b83-cd67424ad82d-kube-api-access-d9v5b\") pod \"rabbitmq-cluster-operator-index-nztp8\" (UID: \"be408dba-dcbf-40e4-9b83-cd67424ad82d\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.506948 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9v5b\" (UniqueName: \"kubernetes.io/projected/be408dba-dcbf-40e4-9b83-cd67424ad82d-kube-api-access-d9v5b\") pod \"rabbitmq-cluster-operator-index-nztp8\" (UID: \"be408dba-dcbf-40e4-9b83-cd67424ad82d\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.648288 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.861669 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp8"] Feb 01 07:35:11 crc kubenswrapper[4835]: I0201 07:35:11.136046 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" event={"ID":"be408dba-dcbf-40e4-9b83-cd67424ad82d","Type":"ContainerStarted","Data":"62af7412c494861b55af2471e0613e66a5f97e9faedbd7e1992431b82f2e9547"} Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.051597 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/root-account-create-update-gmb7x"] Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.052296 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.054181 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"openstack-mariadb-root-db-secret" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.061307 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gmb7x"] Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.113119 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts\") pod \"root-account-create-update-gmb7x\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.113555 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74qnb\" (UniqueName: \"kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb\") pod \"root-account-create-update-gmb7x\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.142662 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"37529abc-a5d7-416b-8ea4-c6f0542ab3a8","Type":"ContainerStarted","Data":"e2b27039e88a5fec5a52799cecf637333dc65696640cbb74a7d2047b185e305b"} Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.142801 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.175132 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/memcached-0" podStartSLOduration=2.412485897 podStartE2EDuration="5.175118091s" podCreationTimestamp="2026-02-01 07:35:07 +0000 UTC" firstStartedPulling="2026-02-01 07:35:09.007572176 +0000 UTC m=+782.128008610" lastFinishedPulling="2026-02-01 07:35:11.77020437 +0000 UTC m=+784.890640804" observedRunningTime="2026-02-01 07:35:12.173806077 +0000 UTC m=+785.294242511" watchObservedRunningTime="2026-02-01 07:35:12.175118091 +0000 UTC m=+785.295554525" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.215047 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts\") pod \"root-account-create-update-gmb7x\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.215125 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74qnb\" (UniqueName: \"kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb\") pod \"root-account-create-update-gmb7x\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.216898 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts\") pod \"root-account-create-update-gmb7x\" (UID: 
\"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.238089 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74qnb\" (UniqueName: \"kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb\") pod \"root-account-create-update-gmb7x\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.365632 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.884823 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gmb7x"] Feb 01 07:35:12 crc kubenswrapper[4835]: W0201 07:35:12.959919 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a95fd7f_8f31_420b_a847_e13f61aa0ce9.slice/crio-7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4 WatchSource:0}: Error finding container 7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4: Status 404 returned error can't find the container with id 7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4 Feb 01 07:35:13 crc kubenswrapper[4835]: I0201 07:35:13.179699 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gmb7x" event={"ID":"5a95fd7f-8f31-420b-a847-e13f61aa0ce9","Type":"ContainerStarted","Data":"4150461df03e979f73af252c924d3235e5873da5e6ee9fff2b41bd3c4a7515a0"} Feb 01 07:35:13 crc kubenswrapper[4835]: I0201 07:35:13.179961 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gmb7x" event={"ID":"5a95fd7f-8f31-420b-a847-e13f61aa0ce9","Type":"ContainerStarted","Data":"7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4"} Feb 01 07:35:13 crc kubenswrapper[4835]: I0201 07:35:13.197486 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/root-account-create-update-gmb7x" podStartSLOduration=1.197469865 podStartE2EDuration="1.197469865s" podCreationTimestamp="2026-02-01 07:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:35:13.194920638 +0000 UTC m=+786.315357072" watchObservedRunningTime="2026-02-01 07:35:13.197469865 +0000 UTC m=+786.317906299" Feb 01 07:35:13 crc kubenswrapper[4835]: I0201 07:35:13.449993 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/openstack-galera-2" podUID="f271d73a-6ed8-4c97-b087-c6b3287c11e4" containerName="galera" probeResult="failure" output=< Feb 01 07:35:13 crc kubenswrapper[4835]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Feb 01 07:35:13 crc kubenswrapper[4835]: > Feb 01 07:35:14 crc kubenswrapper[4835]: I0201 07:35:14.574222 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:35:14 crc kubenswrapper[4835]: I0201 07:35:14.665455 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:35:15 crc kubenswrapper[4835]: I0201 07:35:15.190177 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="5a95fd7f-8f31-420b-a847-e13f61aa0ce9" containerID="4150461df03e979f73af252c924d3235e5873da5e6ee9fff2b41bd3c4a7515a0" exitCode=0 Feb 01 07:35:15 crc kubenswrapper[4835]: I0201 07:35:15.190209 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gmb7x" event={"ID":"5a95fd7f-8f31-420b-a847-e13f61aa0ce9","Type":"ContainerDied","Data":"4150461df03e979f73af252c924d3235e5873da5e6ee9fff2b41bd3c4a7515a0"} Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.199489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" event={"ID":"be408dba-dcbf-40e4-9b83-cd67424ad82d","Type":"ContainerStarted","Data":"42e61bae0028233cd887c1a2c734dd4cb60bba1e5e9473c8b0715142c0adab43"} Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.213090 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" podStartSLOduration=1.222088192 podStartE2EDuration="6.213070083s" podCreationTimestamp="2026-02-01 07:35:10 +0000 UTC" firstStartedPulling="2026-02-01 07:35:10.88331093 +0000 UTC m=+784.003747364" lastFinishedPulling="2026-02-01 07:35:15.874292821 +0000 UTC m=+788.994729255" observedRunningTime="2026-02-01 07:35:16.211874432 +0000 UTC m=+789.332310866" watchObservedRunningTime="2026-02-01 07:35:16.213070083 +0000 UTC m=+789.333506517" Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.554939 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.689520 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts\") pod \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.689607 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74qnb\" (UniqueName: \"kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb\") pod \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.690206 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a95fd7f-8f31-420b-a847-e13f61aa0ce9" (UID: "5a95fd7f-8f31-420b-a847-e13f61aa0ce9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.700160 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb" (OuterVolumeSpecName: "kube-api-access-74qnb") pod "5a95fd7f-8f31-420b-a847-e13f61aa0ce9" (UID: "5a95fd7f-8f31-420b-a847-e13f61aa0ce9"). InnerVolumeSpecName "kube-api-access-74qnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.792070 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.792128 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74qnb\" (UniqueName: \"kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb\") on node \"crc\" DevicePath \"\"" Feb 01 07:35:17 crc kubenswrapper[4835]: I0201 07:35:17.209503 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:17 crc kubenswrapper[4835]: I0201 07:35:17.209525 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gmb7x" event={"ID":"5a95fd7f-8f31-420b-a847-e13f61aa0ce9","Type":"ContainerDied","Data":"7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4"} Feb 01 07:35:17 crc kubenswrapper[4835]: I0201 07:35:17.209601 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4" Feb 01 07:35:17 crc kubenswrapper[4835]: I0201 07:35:17.778265 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:18 crc kubenswrapper[4835]: I0201 07:35:18.152112 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:35:18 crc kubenswrapper[4835]: I0201 07:35:18.246730 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:35:20 crc kubenswrapper[4835]: I0201 07:35:20.650078 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:20 crc kubenswrapper[4835]: I0201 07:35:20.652163 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:20 crc kubenswrapper[4835]: I0201 07:35:20.684152 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:21 crc kubenswrapper[4835]: I0201 07:35:21.271790 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.378684 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k"] Feb 01 07:35:23 crc kubenswrapper[4835]: E0201 07:35:23.381299 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a95fd7f-8f31-420b-a847-e13f61aa0ce9" containerName="mariadb-account-create-update" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.381313 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a95fd7f-8f31-420b-a847-e13f61aa0ce9" containerName="mariadb-account-create-update" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.381469 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a95fd7f-8f31-420b-a847-e13f61aa0ce9" containerName="mariadb-account-create-update" Feb 01 07:35:23 
crc kubenswrapper[4835]: I0201 07:35:23.382866 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.386651 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.411778 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k"] Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.513590 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.513678 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cnqd\" (UniqueName: \"kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.513722 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.615582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.615688 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnqd\" (UniqueName: \"kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.615755 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.616816 4835 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.616861 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.650276 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cnqd\" (UniqueName: \"kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.710107 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.989367 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k"] Feb 01 07:35:23 crc kubenswrapper[4835]: W0201 07:35:23.992562 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59f26b1b_b8b2_4479_8e35_a7a46c629d35.slice/crio-d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64 WatchSource:0}: Error finding container d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64: Status 404 returned error can't find the container with id d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64 Feb 01 07:35:24 crc kubenswrapper[4835]: I0201 07:35:24.272342 4835 generic.go:334] "Generic (PLEG): container finished" podID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerID="b6026b1967a0afc8e6eaed5606a24b459a9c02ffdc13c1973a9ff9e81ba50c34" exitCode=0 Feb 01 07:35:24 crc kubenswrapper[4835]: I0201 07:35:24.272611 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerDied","Data":"b6026b1967a0afc8e6eaed5606a24b459a9c02ffdc13c1973a9ff9e81ba50c34"} Feb 01 07:35:24 crc kubenswrapper[4835]: I0201 07:35:24.272709 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerStarted","Data":"d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64"} Feb 01 07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.191973 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 
07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.192057 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.192113 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.192924 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.193023 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa" gracePeriod=600 Feb 01 07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.284499 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerStarted","Data":"ea63aa4f3bbb37aa8d4856c69bcaaac3274e2ae13b60a20b3975aa15031337de"} Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.299626 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa" exitCode=0 Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.299691 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa"} Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.300059 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05"} Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.300093 4835 scope.go:117] "RemoveContainer" containerID="377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f" Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.304547 4835 generic.go:334] "Generic (PLEG): container finished" podID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerID="ea63aa4f3bbb37aa8d4856c69bcaaac3274e2ae13b60a20b3975aa15031337de" exitCode=0 Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.304586 4835 generic.go:334] "Generic (PLEG): container finished" podID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerID="cf68852754c97d96b1e5ebd1c69c8edb15576653503f5a13562894c2eb5b15f5" exitCode=0 Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.304695 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerDied","Data":"ea63aa4f3bbb37aa8d4856c69bcaaac3274e2ae13b60a20b3975aa15031337de"} Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.304740 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerDied","Data":"cf68852754c97d96b1e5ebd1c69c8edb15576653503f5a13562894c2eb5b15f5"} Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.705719 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.780907 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cnqd\" (UniqueName: \"kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd\") pod \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.780976 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle\") pod \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.781027 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util\") pod \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.782054 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle" (OuterVolumeSpecName: "bundle") pod "59f26b1b-b8b2-4479-8e35-a7a46c629d35" (UID: "59f26b1b-b8b2-4479-8e35-a7a46c629d35"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.793076 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd" (OuterVolumeSpecName: "kube-api-access-9cnqd") pod "59f26b1b-b8b2-4479-8e35-a7a46c629d35" (UID: "59f26b1b-b8b2-4479-8e35-a7a46c629d35"). InnerVolumeSpecName "kube-api-access-9cnqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.805591 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util" (OuterVolumeSpecName: "util") pod "59f26b1b-b8b2-4479-8e35-a7a46c629d35" (UID: "59f26b1b-b8b2-4479-8e35-a7a46c629d35"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.882785 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cnqd\" (UniqueName: \"kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd\") on node \"crc\" DevicePath \"\"" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.883229 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.883248 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:35:28 crc kubenswrapper[4835]: I0201 07:35:28.328292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerDied","Data":"d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64"} Feb 01 07:35:28 crc kubenswrapper[4835]: I0201 07:35:28.328353 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64" Feb 01 07:35:28 crc kubenswrapper[4835]: I0201 07:35:28.328388 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:28 crc kubenswrapper[4835]: E0201 07:35:28.475395 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59f26b1b_b8b2_4479_8e35_a7a46c629d35.slice\": RecentStats: unable to find data in memory cache]" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.302007 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9"] Feb 01 07:35:35 crc kubenswrapper[4835]: E0201 07:35:35.302599 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="util" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.302610 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="util" Feb 01 07:35:35 crc kubenswrapper[4835]: E0201 07:35:35.302620 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="extract" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.302626 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="extract" Feb 01 07:35:35 crc kubenswrapper[4835]: E0201 07:35:35.302637 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="pull" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.302642 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="pull" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.302742 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="extract" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 
07:35:35.303155 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.304874 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-xnddj" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.313728 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9"] Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.387812 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctsjp\" (UniqueName: \"kubernetes.io/projected/b76bd603-252c-4c26-a1c7-0009be5661be-kube-api-access-ctsjp\") pod \"rabbitmq-cluster-operator-779fc9694b-fhcz9\" (UID: \"b76bd603-252c-4c26-a1c7-0009be5661be\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.489319 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctsjp\" (UniqueName: \"kubernetes.io/projected/b76bd603-252c-4c26-a1c7-0009be5661be-kube-api-access-ctsjp\") pod \"rabbitmq-cluster-operator-779fc9694b-fhcz9\" (UID: \"b76bd603-252c-4c26-a1c7-0009be5661be\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.515611 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctsjp\" (UniqueName: \"kubernetes.io/projected/b76bd603-252c-4c26-a1c7-0009be5661be-kube-api-access-ctsjp\") pod \"rabbitmq-cluster-operator-779fc9694b-fhcz9\" (UID: \"b76bd603-252c-4c26-a1c7-0009be5661be\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.618727 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" Feb 01 07:35:36 crc kubenswrapper[4835]: I0201 07:35:36.109021 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9"] Feb 01 07:35:36 crc kubenswrapper[4835]: W0201 07:35:36.122845 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb76bd603_252c_4c26_a1c7_0009be5661be.slice/crio-c20e11167610d595b84c87f6d87f2aff893c291cf02c6ccd9268cebab5799fe2 WatchSource:0}: Error finding container c20e11167610d595b84c87f6d87f2aff893c291cf02c6ccd9268cebab5799fe2: Status 404 returned error can't find the container with id c20e11167610d595b84c87f6d87f2aff893c291cf02c6ccd9268cebab5799fe2 Feb 01 07:35:36 crc kubenswrapper[4835]: I0201 07:35:36.398570 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" event={"ID":"b76bd603-252c-4c26-a1c7-0009be5661be","Type":"ContainerStarted","Data":"c20e11167610d595b84c87f6d87f2aff893c291cf02c6ccd9268cebab5799fe2"} Feb 01 07:35:39 crc kubenswrapper[4835]: I0201 07:35:39.424580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" event={"ID":"b76bd603-252c-4c26-a1c7-0009be5661be","Type":"ContainerStarted","Data":"5eccc636e49f64cb1c17047d447a67c1b14712efb95f7605cd69bf445160c6d7"} Feb 01 07:35:39 crc kubenswrapper[4835]: I0201 07:35:39.456793 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" podStartSLOduration=1.441646446 podStartE2EDuration="4.456759651s" podCreationTimestamp="2026-02-01 07:35:35 +0000 UTC" firstStartedPulling="2026-02-01 07:35:36.125652343 +0000 UTC m=+809.246088777" lastFinishedPulling="2026-02-01 07:35:39.140765498 +0000 UTC m=+812.261201982" observedRunningTime="2026-02-01 07:35:39.446030438 +0000 UTC m=+812.566466902" watchObservedRunningTime="2026-02-01 07:35:39.456759651 +0000 UTC m=+812.577196125" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.731834 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.733660 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.736865 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-server-dockercfg-ztvxx" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.737088 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-default-user" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.737190 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-erlang-cookie" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.737280 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-plugins-conf" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.737227 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-server-conf" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.762062 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.796995 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797046 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797077 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797096 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797144 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797160 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k9mm\" (UniqueName: \"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-kube-api-access-9k9mm\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc 
kubenswrapper[4835]: I0201 07:35:41.797195 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797232 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898509 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898577 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898622 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898658 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k9mm\" (UniqueName: \"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-kube-api-access-9k9mm\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898725 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898799 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898871 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898922 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.900649 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.900695 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.901033 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.901964 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.902033 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fe67dcb5fd9741690176c772121471f4cbb81a238dd7982ba8fc34196e18fb2b/globalmount\"" pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.909069 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.911948 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.920370 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.922040 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k9mm\" (UniqueName: 
\"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-kube-api-access-9k9mm\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.934879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.062805 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.541467 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Feb 01 07:35:42 crc kubenswrapper[4835]: W0201 07:35:42.554010 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e38bb1_d3dc_46d8_8b2d_8cc583a0a70e.slice/crio-97eecd711505cc5e999b9ae04d7f8884fe5fbf848cb06ab0e2d678fd57c85861 WatchSource:0}: Error finding container 97eecd711505cc5e999b9ae04d7f8884fe5fbf848cb06ab0e2d678fd57c85861: Status 404 returned error can't find the container with id 97eecd711505cc5e999b9ae04d7f8884fe5fbf848cb06ab0e2d678fd57c85861 Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.714683 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-6hv5l"] Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.715475 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.718439 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-pq6mc" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.734108 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-6hv5l"] Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.810930 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwlzx\" (UniqueName: \"kubernetes.io/projected/09002d70-8878-4f31-bc75-ddf7378a8564-kube-api-access-fwlzx\") pod \"keystone-operator-index-6hv5l\" (UID: \"09002d70-8878-4f31-bc75-ddf7378a8564\") " pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.912065 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwlzx\" (UniqueName: \"kubernetes.io/projected/09002d70-8878-4f31-bc75-ddf7378a8564-kube-api-access-fwlzx\") pod \"keystone-operator-index-6hv5l\" (UID: \"09002d70-8878-4f31-bc75-ddf7378a8564\") " pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.942144 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwlzx\" (UniqueName: \"kubernetes.io/projected/09002d70-8878-4f31-bc75-ddf7378a8564-kube-api-access-fwlzx\") pod \"keystone-operator-index-6hv5l\" (UID: \"09002d70-8878-4f31-bc75-ddf7378a8564\") " pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:43 crc kubenswrapper[4835]: I0201 07:35:43.040081 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:43 crc kubenswrapper[4835]: I0201 07:35:43.457664 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e","Type":"ContainerStarted","Data":"97eecd711505cc5e999b9ae04d7f8884fe5fbf848cb06ab0e2d678fd57c85861"} Feb 01 07:35:43 crc kubenswrapper[4835]: I0201 07:35:43.465338 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-6hv5l"] Feb 01 07:35:43 crc kubenswrapper[4835]: W0201 07:35:43.476587 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09002d70_8878_4f31_bc75_ddf7378a8564.slice/crio-5106841f6f36b31582e2023703f39b197a96342d9ad65d9a77171b5d90a1c805 WatchSource:0}: Error finding container 5106841f6f36b31582e2023703f39b197a96342d9ad65d9a77171b5d90a1c805: Status 404 returned error can't find the container with id 5106841f6f36b31582e2023703f39b197a96342d9ad65d9a77171b5d90a1c805 Feb 01 07:35:44 crc kubenswrapper[4835]: I0201 07:35:44.463678 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-6hv5l" event={"ID":"09002d70-8878-4f31-bc75-ddf7378a8564","Type":"ContainerStarted","Data":"5106841f6f36b31582e2023703f39b197a96342d9ad65d9a77171b5d90a1c805"} Feb 01 07:35:49 crc kubenswrapper[4835]: I0201 07:35:49.514583 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-6hv5l" event={"ID":"09002d70-8878-4f31-bc75-ddf7378a8564","Type":"ContainerStarted","Data":"e73188334385a7e0e320e25ff5d163c112dac7b4f08f979d16347d097b566b46"} Feb 01 07:35:49 crc kubenswrapper[4835]: I0201 07:35:49.546319 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-6hv5l" podStartSLOduration=3.527454895 podStartE2EDuration="7.546289285s" podCreationTimestamp="2026-02-01 07:35:42 +0000 UTC" firstStartedPulling="2026-02-01 07:35:43.477829088 +0000 UTC m=+816.598265522" lastFinishedPulling="2026-02-01 07:35:47.496663478 +0000 UTC m=+820.617099912" observedRunningTime="2026-02-01 07:35:49.534665929 +0000 UTC m=+822.655102393" watchObservedRunningTime="2026-02-01 07:35:49.546289285 +0000 UTC m=+822.666725759" Feb 01 07:35:50 crc kubenswrapper[4835]: I0201 07:35:50.525487 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e","Type":"ContainerStarted","Data":"247ffa054aae7a8b1b3224a16b77460f26fe6817a4d71d43837f34ade749792d"} Feb 01 07:35:53 crc kubenswrapper[4835]: I0201 07:35:53.041094 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:53 crc kubenswrapper[4835]: I0201 07:35:53.041577 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:53 crc kubenswrapper[4835]: I0201 07:35:53.083271 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:53 crc kubenswrapper[4835]: I0201 07:35:53.596189 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:54 crc kubenswrapper[4835]: I0201 07:35:54.978950 4835 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm"] Feb 01 07:35:54 crc kubenswrapper[4835]: I0201 07:35:54.981565 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:54 crc kubenswrapper[4835]: I0201 07:35:54.984225 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm" Feb 01 07:35:54 crc kubenswrapper[4835]: I0201 07:35:54.989380 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm"] Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.143916 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.144174 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l549n\" (UniqueName: \"kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.144297 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.245977 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.246062 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.246114 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l549n\" (UniqueName: \"kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " 
pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.246873 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.247013 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.268887 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l549n\" (UniqueName: \"kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.314720 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.787838 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm"] Feb 01 07:35:56 crc kubenswrapper[4835]: I0201 07:35:56.573838 4835 generic.go:334] "Generic (PLEG): container finished" podID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerID="72bbaa515813b901a7d0ad68680c4decc5ce25b465f61b3ac1d95201f3bbc5ee" exitCode=0 Feb 01 07:35:56 crc kubenswrapper[4835]: I0201 07:35:56.574077 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" event={"ID":"667e6752-afe4-4918-9457-57c5eb1a6aae","Type":"ContainerDied","Data":"72bbaa515813b901a7d0ad68680c4decc5ce25b465f61b3ac1d95201f3bbc5ee"} Feb 01 07:35:56 crc kubenswrapper[4835]: I0201 07:35:56.574236 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" event={"ID":"667e6752-afe4-4918-9457-57c5eb1a6aae","Type":"ContainerStarted","Data":"6e9a23f5045cd6097995370de4c45763374c11683dd08b2135f996ba056f9f60"} Feb 01 07:35:57 crc kubenswrapper[4835]: I0201 07:35:57.582664 4835 generic.go:334] "Generic (PLEG): container finished" podID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerID="793805b90b326aac75f9791b51156de1e873292c5f40bba477f6fd0cdfe721a4" exitCode=0 Feb 01 07:35:57 crc kubenswrapper[4835]: I0201 07:35:57.582867 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" event={"ID":"667e6752-afe4-4918-9457-57c5eb1a6aae","Type":"ContainerDied","Data":"793805b90b326aac75f9791b51156de1e873292c5f40bba477f6fd0cdfe721a4"} Feb 01 07:35:58 crc kubenswrapper[4835]: I0201 07:35:58.597241 4835 
generic.go:334] "Generic (PLEG): container finished" podID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerID="f91f6df3e3f1b5820feba7a26c52eece27a49db37c1bb83bc096d1bffa51331d" exitCode=0 Feb 01 07:35:58 crc kubenswrapper[4835]: I0201 07:35:58.597308 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" event={"ID":"667e6752-afe4-4918-9457-57c5eb1a6aae","Type":"ContainerDied","Data":"f91f6df3e3f1b5820feba7a26c52eece27a49db37c1bb83bc096d1bffa51331d"} Feb 01 07:35:59 crc kubenswrapper[4835]: I0201 07:35:59.940704 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.016548 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle\") pod \"667e6752-afe4-4918-9457-57c5eb1a6aae\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.017101 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l549n\" (UniqueName: \"kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n\") pod \"667e6752-afe4-4918-9457-57c5eb1a6aae\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.017139 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util\") pod \"667e6752-afe4-4918-9457-57c5eb1a6aae\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.018169 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle" (OuterVolumeSpecName: "bundle") pod "667e6752-afe4-4918-9457-57c5eb1a6aae" (UID: "667e6752-afe4-4918-9457-57c5eb1a6aae"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.023796 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n" (OuterVolumeSpecName: "kube-api-access-l549n") pod "667e6752-afe4-4918-9457-57c5eb1a6aae" (UID: "667e6752-afe4-4918-9457-57c5eb1a6aae"). InnerVolumeSpecName "kube-api-access-l549n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.049555 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util" (OuterVolumeSpecName: "util") pod "667e6752-afe4-4918-9457-57c5eb1a6aae" (UID: "667e6752-afe4-4918-9457-57c5eb1a6aae"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.118813 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l549n\" (UniqueName: \"kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.118871 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.118894 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.614752 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" event={"ID":"667e6752-afe4-4918-9457-57c5eb1a6aae","Type":"ContainerDied","Data":"6e9a23f5045cd6097995370de4c45763374c11683dd08b2135f996ba056f9f60"} Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.615068 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e9a23f5045cd6097995370de4c45763374c11683dd08b2135f996ba056f9f60" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.614865 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.074246 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4"] Feb 01 07:36:12 crc kubenswrapper[4835]: E0201 07:36:12.075076 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="extract" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.075093 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="extract" Feb 01 07:36:12 crc kubenswrapper[4835]: E0201 07:36:12.075108 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="util" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.075117 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="util" Feb 01 07:36:12 crc kubenswrapper[4835]: E0201 07:36:12.075146 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="pull" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.075155 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="pull" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.075300 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="extract" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.075819 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.077995 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.078165 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-k9cc8" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.083243 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4"] Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.246063 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-apiservice-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.246119 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjp6q\" (UniqueName: \"kubernetes.io/projected/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-kube-api-access-vjp6q\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.246169 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-webhook-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.347245 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-webhook-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.347452 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-apiservice-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.347502 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjp6q\" (UniqueName: \"kubernetes.io/projected/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-kube-api-access-vjp6q\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.353134 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-apiservice-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.360310 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-webhook-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.367013 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjp6q\" (UniqueName: \"kubernetes.io/projected/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-kube-api-access-vjp6q\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.395682 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.819715 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4"] Feb 01 07:36:12 crc kubenswrapper[4835]: W0201 07:36:12.830689 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84eb5c79_bae7_43b3_9b04_c949dc8c5ec4.slice/crio-7d9909bfe9dd457bb7ae9753ba46c183780f958b7d714c3d260bf4a705b2cde4 WatchSource:0}: Error finding container 7d9909bfe9dd457bb7ae9753ba46c183780f958b7d714c3d260bf4a705b2cde4: Status 404 returned error can't find the container with id 7d9909bfe9dd457bb7ae9753ba46c183780f958b7d714c3d260bf4a705b2cde4 Feb 01 07:36:13 crc kubenswrapper[4835]: I0201 07:36:13.711627 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" event={"ID":"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4","Type":"ContainerStarted","Data":"7d9909bfe9dd457bb7ae9753ba46c183780f958b7d714c3d260bf4a705b2cde4"} Feb 01 07:36:16 crc kubenswrapper[4835]: I0201 07:36:16.731704 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" event={"ID":"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4","Type":"ContainerStarted","Data":"209c2f8a7171f51cfdfc041099d5340638a4d272f8bd3a5c8320542fb7cb27f0"} Feb 01 07:36:16 crc kubenswrapper[4835]: I0201 07:36:16.732203 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:16 crc kubenswrapper[4835]: I0201 07:36:16.762605 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" podStartSLOduration=1.340389298 podStartE2EDuration="4.762581123s" podCreationTimestamp="2026-02-01 07:36:12 +0000 UTC" firstStartedPulling="2026-02-01 07:36:12.835499839 +0000 UTC m=+845.955936313" 
lastFinishedPulling="2026-02-01 07:36:16.257691704 +0000 UTC m=+849.378128138" observedRunningTime="2026-02-01 07:36:16.758014794 +0000 UTC m=+849.878451288" watchObservedRunningTime="2026-02-01 07:36:16.762581123 +0000 UTC m=+849.883017567" Feb 01 07:36:22 crc kubenswrapper[4835]: I0201 07:36:22.401877 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:22 crc kubenswrapper[4835]: I0201 07:36:22.775221 4835 generic.go:334] "Generic (PLEG): container finished" podID="34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e" containerID="247ffa054aae7a8b1b3224a16b77460f26fe6817a4d71d43837f34ade749792d" exitCode=0 Feb 01 07:36:22 crc kubenswrapper[4835]: I0201 07:36:22.775300 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e","Type":"ContainerDied","Data":"247ffa054aae7a8b1b3224a16b77460f26fe6817a4d71d43837f34ade749792d"} Feb 01 07:36:23 crc kubenswrapper[4835]: I0201 07:36:23.784374 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e","Type":"ContainerStarted","Data":"19e8242448e511e78c6b154dd37c8b1a43d6098db208e09f1d7e0ef72e64e253"} Feb 01 07:36:23 crc kubenswrapper[4835]: I0201 07:36:23.785196 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:36:23 crc kubenswrapper[4835]: I0201 07:36:23.811637 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/rabbitmq-server-0" podStartSLOduration=37.984108617 podStartE2EDuration="43.811617155s" podCreationTimestamp="2026-02-01 07:35:40 +0000 UTC" firstStartedPulling="2026-02-01 07:35:42.557114048 +0000 UTC m=+815.677550472" lastFinishedPulling="2026-02-01 07:35:48.384622536 +0000 UTC m=+821.505059010" observedRunningTime="2026-02-01 07:36:23.805078965 +0000 UTC m=+856.925515399" watchObservedRunningTime="2026-02-01 07:36:23.811617155 +0000 UTC m=+856.932053589" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.111189 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-d22d-account-create-update-clkrg"] Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.112337 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.115963 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-create-m9js9"] Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.116991 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.118571 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-db-secret" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.132922 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-m9js9"] Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.145785 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-d22d-account-create-update-clkrg"] Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.262983 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.263249 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqp4\" (UniqueName: \"kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.263332 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cqc\" (UniqueName: \"kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.263434 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.364649 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2cqc\" (UniqueName: \"kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.364726 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.364814 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " 
pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.364852 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnqp4\" (UniqueName: \"kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.365682 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.365748 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.383125 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2cqc\" (UniqueName: \"kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.385056 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnqp4\" (UniqueName: \"kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.446770 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.453144 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.034862 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-d22d-account-create-update-clkrg"] Feb 01 07:36:26 crc kubenswrapper[4835]: W0201 07:36:26.042217 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod766b4c0a_da92_4fe7_bf95_4a39f3fafafe.slice/crio-f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c WatchSource:0}: Error finding container f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c: Status 404 returned error can't find the container with id f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.154032 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-m9js9"] Feb 01 07:36:26 crc kubenswrapper[4835]: W0201 07:36:26.161405 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf574f591_2220_4cd1_88f7_ac79ac332aae.slice/crio-116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929 WatchSource:0}: Error finding container 116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929: Status 404 returned error can't find the container with id 116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929 Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.801987 4835 generic.go:334] "Generic (PLEG): container finished" podID="766b4c0a-da92-4fe7-bf95-4a39f3fafafe" containerID="215269eb271992c8cbc8e79c691e2434a7dce5223c9258cc1ad2fca20f897f92" exitCode=0 Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.802046 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" event={"ID":"766b4c0a-da92-4fe7-bf95-4a39f3fafafe","Type":"ContainerDied","Data":"215269eb271992c8cbc8e79c691e2434a7dce5223c9258cc1ad2fca20f897f92"} Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.802071 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" event={"ID":"766b4c0a-da92-4fe7-bf95-4a39f3fafafe","Type":"ContainerStarted","Data":"f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c"} Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.803943 4835 generic.go:334] "Generic (PLEG): container finished" podID="f574f591-2220-4cd1-88f7-ac79ac332aae" containerID="fe725302a8ffa5be3e180ac6b253d15da455fbca578acdea4628b374a3cde003" exitCode=0 Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.804023 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-m9js9" event={"ID":"f574f591-2220-4cd1-88f7-ac79ac332aae","Type":"ContainerDied","Data":"fe725302a8ffa5be3e180ac6b253d15da455fbca578acdea4628b374a3cde003"} Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.804070 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-m9js9" event={"ID":"f574f591-2220-4cd1-88f7-ac79ac332aae","Type":"ContainerStarted","Data":"116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929"} Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.117242 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-index-fmwqp"] Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 
07:36:28.118277 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.120368 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-index-dockercfg-8k4l7" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.140140 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-fmwqp"] Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.237393 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.242226 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.273265 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsx9\" (UniqueName: \"kubernetes.io/projected/4fa5ae77-daab-43fa-b798-b9895f717e0a-kube-api-access-8hsx9\") pod \"barbican-operator-index-fmwqp\" (UID: \"4fa5ae77-daab-43fa-b798-b9895f717e0a\") " pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.373900 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2cqc\" (UniqueName: \"kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc\") pod \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.374042 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts\") pod \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.374087 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnqp4\" (UniqueName: \"kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4\") pod \"f574f591-2220-4cd1-88f7-ac79ac332aae\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.374966 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "766b4c0a-da92-4fe7-bf95-4a39f3fafafe" (UID: "766b4c0a-da92-4fe7-bf95-4a39f3fafafe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.375201 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts\") pod \"f574f591-2220-4cd1-88f7-ac79ac332aae\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.375623 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f574f591-2220-4cd1-88f7-ac79ac332aae" (UID: "f574f591-2220-4cd1-88f7-ac79ac332aae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.375675 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hsx9\" (UniqueName: \"kubernetes.io/projected/4fa5ae77-daab-43fa-b798-b9895f717e0a-kube-api-access-8hsx9\") pod \"barbican-operator-index-fmwqp\" (UID: \"4fa5ae77-daab-43fa-b798-b9895f717e0a\") " pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.376632 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.376667 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.384703 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4" (OuterVolumeSpecName: "kube-api-access-mnqp4") pod "f574f591-2220-4cd1-88f7-ac79ac332aae" (UID: "f574f591-2220-4cd1-88f7-ac79ac332aae"). InnerVolumeSpecName "kube-api-access-mnqp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.384812 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc" (OuterVolumeSpecName: "kube-api-access-w2cqc") pod "766b4c0a-da92-4fe7-bf95-4a39f3fafafe" (UID: "766b4c0a-da92-4fe7-bf95-4a39f3fafafe"). InnerVolumeSpecName "kube-api-access-w2cqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.399527 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hsx9\" (UniqueName: \"kubernetes.io/projected/4fa5ae77-daab-43fa-b798-b9895f717e0a-kube-api-access-8hsx9\") pod \"barbican-operator-index-fmwqp\" (UID: \"4fa5ae77-daab-43fa-b798-b9895f717e0a\") " pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.435162 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.489833 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2cqc\" (UniqueName: \"kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.489865 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnqp4\" (UniqueName: \"kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.818907 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" event={"ID":"766b4c0a-da92-4fe7-bf95-4a39f3fafafe","Type":"ContainerDied","Data":"f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c"} Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.819217 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.818926 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.820630 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-m9js9" event={"ID":"f574f591-2220-4cd1-88f7-ac79ac332aae","Type":"ContainerDied","Data":"116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929"} Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.820658 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.820691 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.910185 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-fmwqp"] Feb 01 07:36:29 crc kubenswrapper[4835]: I0201 07:36:29.833669 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-fmwqp" event={"ID":"4fa5ae77-daab-43fa-b798-b9895f717e0a","Type":"ContainerStarted","Data":"54a03cd57752b9215cc8a2e7918ca730a1757de3e169d7a8117c5684a6058844"} Feb 01 07:36:30 crc kubenswrapper[4835]: I0201 07:36:30.852485 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-fmwqp" event={"ID":"4fa5ae77-daab-43fa-b798-b9895f717e0a","Type":"ContainerStarted","Data":"2f82ecb9b26c9b5db30b43b7b808bf402cb55b491714a5b1ff685deee7aa0a06"} Feb 01 07:36:30 crc kubenswrapper[4835]: I0201 07:36:30.884014 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-index-fmwqp" podStartSLOduration=2.0696765 podStartE2EDuration="2.883979984s" podCreationTimestamp="2026-02-01 07:36:28 +0000 UTC" firstStartedPulling="2026-02-01 07:36:28.920379686 +0000 UTC m=+862.040816120" lastFinishedPulling="2026-02-01 07:36:29.73468317 +0000 UTC m=+862.855119604" observedRunningTime="2026-02-01 07:36:30.8746099 +0000 UTC m=+863.995046344" watchObservedRunningTime="2026-02-01 07:36:30.883979984 +0000 UTC m=+864.004416458" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.067404 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.564250 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-sync-5w5sr"] Feb 01 07:36:32 crc kubenswrapper[4835]: E0201 07:36:32.564986 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f574f591-2220-4cd1-88f7-ac79ac332aae" containerName="mariadb-database-create" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.565004 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f574f591-2220-4cd1-88f7-ac79ac332aae" containerName="mariadb-database-create" Feb 01 07:36:32 crc kubenswrapper[4835]: E0201 07:36:32.565035 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766b4c0a-da92-4fe7-bf95-4a39f3fafafe" containerName="mariadb-account-create-update" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.565042 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="766b4c0a-da92-4fe7-bf95-4a39f3fafafe" containerName="mariadb-account-create-update" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.565164 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f574f591-2220-4cd1-88f7-ac79ac332aae" containerName="mariadb-database-create" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.565177 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="766b4c0a-da92-4fe7-bf95-4a39f3fafafe" containerName="mariadb-account-create-update" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.565762 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.567848 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.568053 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.569073 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-hgb5p" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.571402 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.576219 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-5w5sr"] Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.662545 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89qxh\" (UniqueName: \"kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.662659 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.764383 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89qxh\" (UniqueName: \"kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.764837 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.770815 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.781381 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89qxh\" (UniqueName: \"kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.896809 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:33 crc kubenswrapper[4835]: I0201 07:36:33.217760 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-5w5sr"] Feb 01 07:36:33 crc kubenswrapper[4835]: W0201 07:36:33.227292 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd1d09a3_13ff_43c0_835a_de9a6f9b5103.slice/crio-bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe WatchSource:0}: Error finding container bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe: Status 404 returned error can't find the container with id bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe Feb 01 07:36:33 crc kubenswrapper[4835]: I0201 07:36:33.876393 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" event={"ID":"cd1d09a3-13ff-43c0-835a-de9a6f9b5103","Type":"ContainerStarted","Data":"bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe"} Feb 01 07:36:38 crc kubenswrapper[4835]: I0201 07:36:38.435650 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:38 crc kubenswrapper[4835]: I0201 07:36:38.436610 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:38 crc kubenswrapper[4835]: I0201 07:36:38.476074 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:38 crc kubenswrapper[4835]: I0201 07:36:38.962603 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:43 crc kubenswrapper[4835]: I0201 07:36:43.987475 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" event={"ID":"cd1d09a3-13ff-43c0-835a-de9a6f9b5103","Type":"ContainerStarted","Data":"a06f9b42349fa2ea28d87918e953134cff78d85714b4da730fc4895d65231d70"} Feb 01 07:36:44 crc kubenswrapper[4835]: I0201 07:36:44.020858 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" podStartSLOduration=1.937893903 podStartE2EDuration="12.020831434s" podCreationTimestamp="2026-02-01 07:36:32 +0000 UTC" firstStartedPulling="2026-02-01 07:36:33.229955385 +0000 UTC m=+866.350391819" lastFinishedPulling="2026-02-01 07:36:43.312892876 +0000 UTC m=+876.433329350" observedRunningTime="2026-02-01 07:36:44.006529782 +0000 UTC m=+877.126966276" watchObservedRunningTime="2026-02-01 07:36:44.020831434 +0000 UTC m=+877.141267908" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.012522 4835 generic.go:334] "Generic (PLEG): container finished" podID="cd1d09a3-13ff-43c0-835a-de9a6f9b5103" containerID="a06f9b42349fa2ea28d87918e953134cff78d85714b4da730fc4895d65231d70" exitCode=0 Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.012656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" event={"ID":"cd1d09a3-13ff-43c0-835a-de9a6f9b5103","Type":"ContainerDied","Data":"a06f9b42349fa2ea28d87918e953134cff78d85714b4da730fc4895d65231d70"} Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.768984 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf"] Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.771189 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.773716 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.779502 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf"] Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.859955 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.860052 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.860087 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s48t4\" (UniqueName: \"kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.961520 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.961724 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.961799 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s48t4\" (UniqueName: \"kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 
crc kubenswrapper[4835]: I0201 07:36:47.962523 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.962550 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.999513 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s48t4\" (UniqueName: \"kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.142568 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.357069 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.367877 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf"] Feb 01 07:36:48 crc kubenswrapper[4835]: W0201 07:36:48.378343 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34b15f05_4416_4999_ba8c_3bc64ada7f04.slice/crio-c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a WatchSource:0}: Error finding container c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a: Status 404 returned error can't find the container with id c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.468532 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data\") pod \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.468606 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89qxh\" (UniqueName: \"kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh\") pod \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.475250 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh" (OuterVolumeSpecName: "kube-api-access-89qxh") pod "cd1d09a3-13ff-43c0-835a-de9a6f9b5103" (UID: 
"cd1d09a3-13ff-43c0-835a-de9a6f9b5103"). InnerVolumeSpecName "kube-api-access-89qxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.507101 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data" (OuterVolumeSpecName: "config-data") pod "cd1d09a3-13ff-43c0-835a-de9a6f9b5103" (UID: "cd1d09a3-13ff-43c0-835a-de9a6f9b5103"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.570513 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.570580 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89qxh\" (UniqueName: \"kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.037586 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" event={"ID":"cd1d09a3-13ff-43c0-835a-de9a6f9b5103","Type":"ContainerDied","Data":"bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe"} Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.037611 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.038225 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.040259 4835 generic.go:334] "Generic (PLEG): container finished" podID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerID="a758f80f79a264f26eed6f223a42becc5edd1638586fb21bac9054e3130e751b" exitCode=0 Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.040326 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" event={"ID":"34b15f05-4416-4999-ba8c-3bc64ada7f04","Type":"ContainerDied","Data":"a758f80f79a264f26eed6f223a42becc5edd1638586fb21bac9054e3130e751b"} Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.040370 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" event={"ID":"34b15f05-4416-4999-ba8c-3bc64ada7f04","Type":"ContainerStarted","Data":"c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a"} Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.228934 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"] Feb 01 07:36:49 crc kubenswrapper[4835]: E0201 07:36:49.229320 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1d09a3-13ff-43c0-835a-de9a6f9b5103" containerName="keystone-db-sync" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.229341 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1d09a3-13ff-43c0-835a-de9a6f9b5103" containerName="keystone-db-sync" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.229549 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1d09a3-13ff-43c0-835a-de9a6f9b5103" 
containerName="keystone-db-sync" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.230277 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.234065 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.235121 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"osp-secret" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.235526 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.237569 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-hgb5p" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.237954 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.257913 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"] Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.383136 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbt25\" (UniqueName: \"kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.383279 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.383340 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.383522 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.383617 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.484811 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbt25\" (UniqueName: \"kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25\") pod 
\"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.484931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.485004 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.485102 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.485186 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.490960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.491370 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.491925 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.492209 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.509766 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbt25\" (UniqueName: \"kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc 
kubenswrapper[4835]: I0201 07:36:49.557276 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.816902 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"] Feb 01 07:36:50 crc kubenswrapper[4835]: I0201 07:36:50.048517 4835 generic.go:334] "Generic (PLEG): container finished" podID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerID="5ab4a566f333981e56101c0b8a532c167e6d02046e37b70ae7cd86f9e5074387" exitCode=0 Feb 01 07:36:50 crc kubenswrapper[4835]: I0201 07:36:50.048640 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" event={"ID":"34b15f05-4416-4999-ba8c-3bc64ada7f04","Type":"ContainerDied","Data":"5ab4a566f333981e56101c0b8a532c167e6d02046e37b70ae7cd86f9e5074387"} Feb 01 07:36:50 crc kubenswrapper[4835]: I0201 07:36:50.052142 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" event={"ID":"bf026661-c9af-420a-8984-f7fbe212e592","Type":"ContainerStarted","Data":"eabeabeae4f73ee57a400f521880f710c03aa93decaac629af5189bf021874a3"} Feb 01 07:36:50 crc kubenswrapper[4835]: I0201 07:36:50.052261 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" event={"ID":"bf026661-c9af-420a-8984-f7fbe212e592","Type":"ContainerStarted","Data":"f9a81db13b96f0df74cca3b4f709858369386ac419f8b4c76dd40e96cb1e2a57"} Feb 01 07:36:51 crc kubenswrapper[4835]: I0201 07:36:51.071499 4835 generic.go:334] "Generic (PLEG): container finished" podID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerID="050f1d779236f9d40b785b72ed4086ba32ddc3b81a4f58145ebfbbebb1134455" exitCode=0 Feb 01 07:36:51 crc kubenswrapper[4835]: I0201 07:36:51.072671 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" event={"ID":"34b15f05-4416-4999-ba8c-3bc64ada7f04","Type":"ContainerDied","Data":"050f1d779236f9d40b785b72ed4086ba32ddc3b81a4f58145ebfbbebb1134455"} Feb 01 07:36:51 crc kubenswrapper[4835]: I0201 07:36:51.107359 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" podStartSLOduration=2.107335 podStartE2EDuration="2.107335s" podCreationTimestamp="2026-02-01 07:36:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:36:50.096165827 +0000 UTC m=+883.216602261" watchObservedRunningTime="2026-02-01 07:36:51.107335 +0000 UTC m=+884.227771474" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.407742 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.433124 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle\") pod \"34b15f05-4416-4999-ba8c-3bc64ada7f04\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.433293 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util\") pod \"34b15f05-4416-4999-ba8c-3bc64ada7f04\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.433352 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s48t4\" (UniqueName: \"kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4\") pod \"34b15f05-4416-4999-ba8c-3bc64ada7f04\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.435375 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle" (OuterVolumeSpecName: "bundle") pod "34b15f05-4416-4999-ba8c-3bc64ada7f04" (UID: "34b15f05-4416-4999-ba8c-3bc64ada7f04"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.442593 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4" (OuterVolumeSpecName: "kube-api-access-s48t4") pod "34b15f05-4416-4999-ba8c-3bc64ada7f04" (UID: "34b15f05-4416-4999-ba8c-3bc64ada7f04"). InnerVolumeSpecName "kube-api-access-s48t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.452609 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util" (OuterVolumeSpecName: "util") pod "34b15f05-4416-4999-ba8c-3bc64ada7f04" (UID: "34b15f05-4416-4999-ba8c-3bc64ada7f04"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.535212 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.535251 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.535281 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s48t4\" (UniqueName: \"kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:53 crc kubenswrapper[4835]: I0201 07:36:53.093150 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" event={"ID":"34b15f05-4416-4999-ba8c-3bc64ada7f04","Type":"ContainerDied","Data":"c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a"} Feb 01 07:36:53 crc kubenswrapper[4835]: I0201 07:36:53.093213 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a" Feb 01 07:36:53 crc kubenswrapper[4835]: I0201 07:36:53.093163 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:53 crc kubenswrapper[4835]: I0201 07:36:53.097825 4835 generic.go:334] "Generic (PLEG): container finished" podID="bf026661-c9af-420a-8984-f7fbe212e592" containerID="eabeabeae4f73ee57a400f521880f710c03aa93decaac629af5189bf021874a3" exitCode=0 Feb 01 07:36:53 crc kubenswrapper[4835]: I0201 07:36:53.097891 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" event={"ID":"bf026661-c9af-420a-8984-f7fbe212e592","Type":"ContainerDied","Data":"eabeabeae4f73ee57a400f521880f710c03aa93decaac629af5189bf021874a3"} Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.425701 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.466629 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys\") pod \"bf026661-c9af-420a-8984-f7fbe212e592\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.467003 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts\") pod \"bf026661-c9af-420a-8984-f7fbe212e592\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.467045 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data\") pod \"bf026661-c9af-420a-8984-f7fbe212e592\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.467067 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys\") pod \"bf026661-c9af-420a-8984-f7fbe212e592\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.467140 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbt25\" (UniqueName: \"kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25\") pod \"bf026661-c9af-420a-8984-f7fbe212e592\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.472756 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25" (OuterVolumeSpecName: "kube-api-access-xbt25") pod "bf026661-c9af-420a-8984-f7fbe212e592" (UID: "bf026661-c9af-420a-8984-f7fbe212e592"). InnerVolumeSpecName "kube-api-access-xbt25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.473082 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bf026661-c9af-420a-8984-f7fbe212e592" (UID: "bf026661-c9af-420a-8984-f7fbe212e592"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.474095 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bf026661-c9af-420a-8984-f7fbe212e592" (UID: "bf026661-c9af-420a-8984-f7fbe212e592"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.475620 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts" (OuterVolumeSpecName: "scripts") pod "bf026661-c9af-420a-8984-f7fbe212e592" (UID: "bf026661-c9af-420a-8984-f7fbe212e592"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.495016 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data" (OuterVolumeSpecName: "config-data") pod "bf026661-c9af-420a-8984-f7fbe212e592" (UID: "bf026661-c9af-420a-8984-f7fbe212e592"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.569734 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbt25\" (UniqueName: \"kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.569782 4835 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.569800 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.569819 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.569835 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.117534 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" event={"ID":"bf026661-c9af-420a-8984-f7fbe212e592","Type":"ContainerDied","Data":"f9a81db13b96f0df74cca3b4f709858369386ac419f8b4c76dd40e96cb1e2a57"} Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.117591 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a81db13b96f0df74cca3b4f709858369386ac419f8b4c76dd40e96cb1e2a57" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.117655 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.332461 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-95fb65664-fmplj"] Feb 01 07:36:55 crc kubenswrapper[4835]: E0201 07:36:55.332859 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="pull" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.332892 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="pull" Feb 01 07:36:55 crc kubenswrapper[4835]: E0201 07:36:55.332940 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="extract" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.332954 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="extract" Feb 01 07:36:55 crc kubenswrapper[4835]: E0201 07:36:55.332987 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="util" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.333005 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="util" Feb 01 07:36:55 crc kubenswrapper[4835]: E0201 07:36:55.333029 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf026661-c9af-420a-8984-f7fbe212e592" containerName="keystone-bootstrap" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.333042 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf026661-c9af-420a-8984-f7fbe212e592" containerName="keystone-bootstrap" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.333275 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="extract" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.333313 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf026661-c9af-420a-8984-f7fbe212e592" containerName="keystone-bootstrap" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.334036 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.339238 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.339922 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.340240 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.340407 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-hgb5p" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.350440 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-95fb65664-fmplj"] Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.481040 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-config-data\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.481533 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-credential-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.481602 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-fernet-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.481695 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7hb\" (UniqueName: \"kubernetes.io/projected/99f218fc-86ce-4952-a7cd-4c80a7cfe774-kube-api-access-9m7hb\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.481736 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-scripts\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.583215 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-config-data\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.583878 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-credential-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.584006 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-fernet-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.584096 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m7hb\" (UniqueName: \"kubernetes.io/projected/99f218fc-86ce-4952-a7cd-4c80a7cfe774-kube-api-access-9m7hb\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.584133 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-scripts\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.589129 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-scripts\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.592692 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-fernet-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.593093 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-credential-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.593954 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-config-data\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.627831 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m7hb\" (UniqueName: \"kubernetes.io/projected/99f218fc-86ce-4952-a7cd-4c80a7cfe774-kube-api-access-9m7hb\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.659055 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:56 crc kubenswrapper[4835]: I0201 07:36:56.163279 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-95fb65664-fmplj"] Feb 01 07:36:57 crc kubenswrapper[4835]: I0201 07:36:57.134254 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" event={"ID":"99f218fc-86ce-4952-a7cd-4c80a7cfe774","Type":"ContainerStarted","Data":"690776ed1a952a39556bea2de8bcf6435198d5b3c2e1610fcabed2621cb7dc94"} Feb 01 07:36:57 crc kubenswrapper[4835]: I0201 07:36:57.134530 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" event={"ID":"99f218fc-86ce-4952-a7cd-4c80a7cfe774","Type":"ContainerStarted","Data":"af968c38f8638debe9c415c87965a2e0d0d002fb31c35b345e8b1bec429487f8"} Feb 01 07:36:57 crc kubenswrapper[4835]: I0201 07:36:57.134548 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:57 crc kubenswrapper[4835]: I0201 07:36:57.177531 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" podStartSLOduration=2.177512899 podStartE2EDuration="2.177512899s" podCreationTimestamp="2026-02-01 07:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:36:57.173471314 +0000 UTC m=+890.293907768" watchObservedRunningTime="2026-02-01 07:36:57.177512899 +0000 UTC m=+890.297949343" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.217973 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5"] Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.219142 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.221255 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-service-cert" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.226961 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cfg6b" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.239761 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5"] Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.373723 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lgfx\" (UniqueName: \"kubernetes.io/projected/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-kube-api-access-5lgfx\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.373796 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-webhook-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.373958 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-apiservice-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.475511 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lgfx\" (UniqueName: \"kubernetes.io/projected/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-kube-api-access-5lgfx\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.475583 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-webhook-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.475619 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-apiservice-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.482032 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-webhook-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.486176 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-apiservice-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.495877 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lgfx\" (UniqueName: \"kubernetes.io/projected/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-kube-api-access-5lgfx\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.541465 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cfg6b" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.550011 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:08 crc kubenswrapper[4835]: I0201 07:37:08.045024 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5"] Feb 01 07:37:08 crc kubenswrapper[4835]: W0201 07:37:08.060652 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2562b9ca_8a8f_4a90_8e8f_fd3e4b235603.slice/crio-64607037233e18f4e976bbc187db317c6ea483ff48896787cedb260a1d41a2ac WatchSource:0}: Error finding container 64607037233e18f4e976bbc187db317c6ea483ff48896787cedb260a1d41a2ac: Status 404 returned error can't find the container with id 64607037233e18f4e976bbc187db317c6ea483ff48896787cedb260a1d41a2ac Feb 01 07:37:08 crc kubenswrapper[4835]: I0201 07:37:08.217383 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" event={"ID":"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603","Type":"ContainerStarted","Data":"64607037233e18f4e976bbc187db317c6ea483ff48896787cedb260a1d41a2ac"} Feb 01 07:37:10 crc kubenswrapper[4835]: I0201 07:37:10.233928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" event={"ID":"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603","Type":"ContainerStarted","Data":"7c6b9db5255affd468e01ac18d8fa746d09be373f766dfcd52f131bc3d21f610"} Feb 01 07:37:10 crc kubenswrapper[4835]: I0201 07:37:10.235327 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:10 crc kubenswrapper[4835]: I0201 07:37:10.254694 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" 
podStartSLOduration=1.361429188 podStartE2EDuration="3.254675647s" podCreationTimestamp="2026-02-01 07:37:07 +0000 UTC" firstStartedPulling="2026-02-01 07:37:08.062917916 +0000 UTC m=+901.183354350" lastFinishedPulling="2026-02-01 07:37:09.956164335 +0000 UTC m=+903.076600809" observedRunningTime="2026-02-01 07:37:10.251877224 +0000 UTC m=+903.372313658" watchObservedRunningTime="2026-02-01 07:37:10.254675647 +0000 UTC m=+903.375112101" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.525215 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.528139 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.552259 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.593268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpqr\" (UniqueName: \"kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.593357 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.593390 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.697468 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srpqr\" (UniqueName: \"kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.697580 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.697620 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.698400 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.699335 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.732506 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srpqr\" (UniqueName: \"kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.867668 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:15 crc kubenswrapper[4835]: I0201 07:37:15.354963 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:15 crc kubenswrapper[4835]: W0201 07:37:15.362466 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod619d1e1e_0c68_4844_86de_2e62153f4f43.slice/crio-1265b6bb64ba3f48c50d80f4e805869952e6bb5b9b995bee99530e9f77977489 WatchSource:0}: Error finding container 1265b6bb64ba3f48c50d80f4e805869952e6bb5b9b995bee99530e9f77977489: Status 404 returned error can't find the container with id 1265b6bb64ba3f48c50d80f4e805869952e6bb5b9b995bee99530e9f77977489 Feb 01 07:37:16 crc kubenswrapper[4835]: I0201 07:37:16.283287 4835 generic.go:334] "Generic (PLEG): container finished" podID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerID="e7201f59379d2bead3c65bd0afefdb43d2476d1d60b92329b0df28725c4698f2" exitCode=0 Feb 01 07:37:16 crc kubenswrapper[4835]: I0201 07:37:16.283461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerDied","Data":"e7201f59379d2bead3c65bd0afefdb43d2476d1d60b92329b0df28725c4698f2"} Feb 01 07:37:16 crc kubenswrapper[4835]: I0201 07:37:16.284674 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerStarted","Data":"1265b6bb64ba3f48c50d80f4e805869952e6bb5b9b995bee99530e9f77977489"} Feb 01 07:37:16 crc kubenswrapper[4835]: I0201 07:37:16.286232 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 07:37:17 crc kubenswrapper[4835]: I0201 07:37:17.297166 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerStarted","Data":"9f2a1ab62add49cdff4bda274340a783a0c8dd6e31e40a7b2352055515842834"} Feb 01 07:37:17 crc kubenswrapper[4835]: I0201 07:37:17.557049 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:18 crc 
kubenswrapper[4835]: I0201 07:37:18.307586 4835 generic.go:334] "Generic (PLEG): container finished" podID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerID="9f2a1ab62add49cdff4bda274340a783a0c8dd6e31e40a7b2352055515842834" exitCode=0 Feb 01 07:37:18 crc kubenswrapper[4835]: I0201 07:37:18.307651 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerDied","Data":"9f2a1ab62add49cdff4bda274340a783a0c8dd6e31e40a7b2352055515842834"} Feb 01 07:37:19 crc kubenswrapper[4835]: I0201 07:37:19.317563 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerStarted","Data":"53b1ebf6d0ef8776fb635aec1bcb95829748e55cb2ae9101cecf48766cb03ce7"} Feb 01 07:37:19 crc kubenswrapper[4835]: I0201 07:37:19.335739 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vpzbj" podStartSLOduration=2.880756722 podStartE2EDuration="5.335720116s" podCreationTimestamp="2026-02-01 07:37:14 +0000 UTC" firstStartedPulling="2026-02-01 07:37:16.285707999 +0000 UTC m=+909.406144473" lastFinishedPulling="2026-02-01 07:37:18.740671393 +0000 UTC m=+911.861107867" observedRunningTime="2026-02-01 07:37:19.333862918 +0000 UTC m=+912.454299362" watchObservedRunningTime="2026-02-01 07:37:19.335720116 +0000 UTC m=+912.456156560" Feb 01 07:37:24 crc kubenswrapper[4835]: I0201 07:37:24.868065 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:24 crc kubenswrapper[4835]: I0201 07:37:24.868644 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:24 crc kubenswrapper[4835]: I0201 07:37:24.925525 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.192449 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.192530 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.442183 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.455663 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-create-ddqhc"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.456486 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.465179 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.465253 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hj7\" (UniqueName: \"kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.472040 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-ddqhc"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.538760 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w65gv"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.540279 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.553658 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w65gv"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.564807 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.566175 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.566209 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82hj7\" (UniqueName: \"kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.566259 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24svr\" (UniqueName: \"kubernetes.io/projected/7f1e8788-786f-4f9d-b492-3a036764b28d-kube-api-access-24svr\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.566289 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-catalog-content\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.566311 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-utilities\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.567611 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.568179 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.570801 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-db-secret" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.593015 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.600085 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82hj7\" (UniqueName: \"kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667028 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h55rt\" (UniqueName: \"kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667098 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667319 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24svr\" (UniqueName: \"kubernetes.io/projected/7f1e8788-786f-4f9d-b492-3a036764b28d-kube-api-access-24svr\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667437 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-catalog-content\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667479 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-utilities\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667908 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-catalog-content\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667957 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-utilities\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.700078 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24svr\" (UniqueName: \"kubernetes.io/projected/7f1e8788-786f-4f9d-b492-3a036764b28d-kube-api-access-24svr\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.768333 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.768454 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h55rt\" (UniqueName: \"kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.769029 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.784999 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h55rt\" (UniqueName: \"kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.831042 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.852331 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.884886 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:26 crc kubenswrapper[4835]: I0201 07:37:26.284755 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w65gv"] Feb 01 07:37:26 crc kubenswrapper[4835]: I0201 07:37:26.314418 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv"] Feb 01 07:37:26 crc kubenswrapper[4835]: I0201 07:37:26.371514 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-ddqhc"] Feb 01 07:37:26 crc kubenswrapper[4835]: I0201 07:37:26.398722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" event={"ID":"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0","Type":"ContainerStarted","Data":"5fd895ce994f67bbc723f4f973be658c7771a96994f15c9d7d69a9d632d3cac3"} Feb 01 07:37:26 crc kubenswrapper[4835]: I0201 07:37:26.405758 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w65gv" event={"ID":"7f1e8788-786f-4f9d-b492-3a036764b28d","Type":"ContainerStarted","Data":"f87ab5c4674c0034adb701c72c950d4ef6c4f3fd07b22b504ababb012b02a61e"} Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.041317 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.418561 4835 generic.go:334] "Generic (PLEG): container finished" podID="545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" containerID="212958e93fcbd8f3fdf3afad7d233490e91ef9f2cf2380e3ac58f8cc1722a0b6" exitCode=0 Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.418776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" event={"ID":"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0","Type":"ContainerDied","Data":"212958e93fcbd8f3fdf3afad7d233490e91ef9f2cf2380e3ac58f8cc1722a0b6"} Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.421475 4835 generic.go:334] "Generic (PLEG): container finished" podID="7f1e8788-786f-4f9d-b492-3a036764b28d" containerID="49b373fec160f5cd6ed7a7b91abccb255e85b7dd2f70bd40f149249b995f3798" exitCode=0 Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.421565 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w65gv" event={"ID":"7f1e8788-786f-4f9d-b492-3a036764b28d","Type":"ContainerDied","Data":"49b373fec160f5cd6ed7a7b91abccb255e85b7dd2f70bd40f149249b995f3798"} Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.425864 4835 generic.go:334] "Generic (PLEG): container finished" podID="26692abf-b5f8-4461-992d-508cb9b73bb2" containerID="65cf85b1dd72d5635988e485f041129154e6406263a9f9918622bbd9bb651c81" exitCode=0 Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.425909 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-ddqhc" event={"ID":"26692abf-b5f8-4461-992d-508cb9b73bb2","Type":"ContainerDied","Data":"65cf85b1dd72d5635988e485f041129154e6406263a9f9918622bbd9bb651c81"} Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.425928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/barbican-db-create-ddqhc" event={"ID":"26692abf-b5f8-4461-992d-508cb9b73bb2","Type":"ContainerStarted","Data":"d01e4179e0c583283545d6ed590e773396c25e1c03cb1cefcbe0609190b9a7b4"} Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.880324 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.883500 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.920629 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts\") pod \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.920727 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts\") pod \"26692abf-b5f8-4461-992d-508cb9b73bb2\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.920760 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82hj7\" (UniqueName: \"kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7\") pod \"26692abf-b5f8-4461-992d-508cb9b73bb2\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.920812 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h55rt\" (UniqueName: \"kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt\") pod \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.922066 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26692abf-b5f8-4461-992d-508cb9b73bb2" (UID: "26692abf-b5f8-4461-992d-508cb9b73bb2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.922176 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" (UID: "545f3a5d-c02e-45f2-aba5-ea50bf4fccd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.928039 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt" (OuterVolumeSpecName: "kube-api-access-h55rt") pod "545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" (UID: "545f3a5d-c02e-45f2-aba5-ea50bf4fccd0"). InnerVolumeSpecName "kube-api-access-h55rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.928625 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7" (OuterVolumeSpecName: "kube-api-access-82hj7") pod "26692abf-b5f8-4461-992d-508cb9b73bb2" (UID: "26692abf-b5f8-4461-992d-508cb9b73bb2"). InnerVolumeSpecName "kube-api-access-82hj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.024036 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h55rt\" (UniqueName: \"kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.024071 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.024080 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.024089 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82hj7\" (UniqueName: \"kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.449459 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" event={"ID":"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0","Type":"ContainerDied","Data":"5fd895ce994f67bbc723f4f973be658c7771a96994f15c9d7d69a9d632d3cac3"} Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.449512 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fd895ce994f67bbc723f4f973be658c7771a96994f15c9d7d69a9d632d3cac3" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.449474 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.451457 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-ddqhc" event={"ID":"26692abf-b5f8-4461-992d-508cb9b73bb2","Type":"ContainerDied","Data":"d01e4179e0c583283545d6ed590e773396c25e1c03cb1cefcbe0609190b9a7b4"} Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.451496 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d01e4179e0c583283545d6ed590e773396c25e1c03cb1cefcbe0609190b9a7b4" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.451545 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.519012 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-tj2nn"] Feb 01 07:37:29 crc kubenswrapper[4835]: E0201 07:37:29.519326 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26692abf-b5f8-4461-992d-508cb9b73bb2" containerName="mariadb-database-create" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.519343 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="26692abf-b5f8-4461-992d-508cb9b73bb2" containerName="mariadb-database-create" Feb 01 07:37:29 crc kubenswrapper[4835]: E0201 07:37:29.519364 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" containerName="mariadb-account-create-update" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.519371 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" containerName="mariadb-account-create-update" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.519551 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" containerName="mariadb-account-create-update" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.519566 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="26692abf-b5f8-4461-992d-508cb9b73bb2" containerName="mariadb-database-create" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.520152 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.524982 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-j5f24" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.531341 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm2qx\" (UniqueName: \"kubernetes.io/projected/ebf9c948-3fde-47f0-aa35-856193c1a275-kube-api-access-hm2qx\") pod \"swift-operator-index-tj2nn\" (UID: \"ebf9c948-3fde-47f0-aa35-856193c1a275\") " pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.549941 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-tj2nn"] Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.632838 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm2qx\" (UniqueName: \"kubernetes.io/projected/ebf9c948-3fde-47f0-aa35-856193c1a275-kube-api-access-hm2qx\") pod \"swift-operator-index-tj2nn\" (UID: \"ebf9c948-3fde-47f0-aa35-856193c1a275\") " pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.651814 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm2qx\" (UniqueName: \"kubernetes.io/projected/ebf9c948-3fde-47f0-aa35-856193c1a275-kube-api-access-hm2qx\") pod \"swift-operator-index-tj2nn\" (UID: \"ebf9c948-3fde-47f0-aa35-856193c1a275\") " pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.708305 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.714346 4835 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vpzbj" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="registry-server" containerID="cri-o://53b1ebf6d0ef8776fb635aec1bcb95829748e55cb2ae9101cecf48766cb03ce7" gracePeriod=2 Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.853903 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.461908 4835 generic.go:334] "Generic (PLEG): container finished" podID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerID="53b1ebf6d0ef8776fb635aec1bcb95829748e55cb2ae9101cecf48766cb03ce7" exitCode=0 Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.461956 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerDied","Data":"53b1ebf6d0ef8776fb635aec1bcb95829748e55cb2ae9101cecf48766cb03ce7"} Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.899846 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-sync-ll8z7"] Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.901805 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.904877 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-jfvt4" Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.907087 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.914058 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-ll8z7"] Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.064481 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fffb\" (UniqueName: \"kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.064547 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.165593 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.165771 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fffb\" (UniqueName: \"kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " 
pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.173674 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.181860 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fffb\" (UniqueName: \"kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.221610 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.757975 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.875347 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities\") pod \"619d1e1e-0c68-4844-86de-2e62153f4f43\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.875763 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content\") pod \"619d1e1e-0c68-4844-86de-2e62153f4f43\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.875804 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srpqr\" (UniqueName: \"kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr\") pod \"619d1e1e-0c68-4844-86de-2e62153f4f43\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.876010 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities" (OuterVolumeSpecName: "utilities") pod "619d1e1e-0c68-4844-86de-2e62153f4f43" (UID: "619d1e1e-0c68-4844-86de-2e62153f4f43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.876199 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.881141 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr" (OuterVolumeSpecName: "kube-api-access-srpqr") pod "619d1e1e-0c68-4844-86de-2e62153f4f43" (UID: "619d1e1e-0c68-4844-86de-2e62153f4f43"). InnerVolumeSpecName "kube-api-access-srpqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.900983 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "619d1e1e-0c68-4844-86de-2e62153f4f43" (UID: "619d1e1e-0c68-4844-86de-2e62153f4f43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.978180 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.978210 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srpqr\" (UniqueName: \"kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.010206 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-ll8z7"] Feb 01 07:37:32 crc kubenswrapper[4835]: W0201 07:37:32.016786 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb13e8606_6ec5_4e1b_a3fd_30f8eac5809a.slice/crio-9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6 WatchSource:0}: Error finding container 9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6: Status 404 returned error can't find the container with id 9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6 Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.291230 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-tj2nn"] Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.477259 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" event={"ID":"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a","Type":"ContainerStarted","Data":"9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6"} Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.479710 4835 generic.go:334] "Generic (PLEG): container finished" podID="7f1e8788-786f-4f9d-b492-3a036764b28d" containerID="f2587070d1a982cc5d125873eda276b59552d3d086a9a0ac397df794ea67afbb" exitCode=0 Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.479828 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w65gv" event={"ID":"7f1e8788-786f-4f9d-b492-3a036764b28d","Type":"ContainerDied","Data":"f2587070d1a982cc5d125873eda276b59552d3d086a9a0ac397df794ea67afbb"} Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.484085 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerDied","Data":"1265b6bb64ba3f48c50d80f4e805869952e6bb5b9b995bee99530e9f77977489"} Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.484172 4835 scope.go:117] "RemoveContainer" containerID="53b1ebf6d0ef8776fb635aec1bcb95829748e55cb2ae9101cecf48766cb03ce7" Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.484237 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.493667 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tj2nn" event={"ID":"ebf9c948-3fde-47f0-aa35-856193c1a275","Type":"ContainerStarted","Data":"e03bb36533b8d4d27c58c7e06075e235155cbf7c1c853020cf017b528786cd03"} Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.506039 4835 scope.go:117] "RemoveContainer" containerID="9f2a1ab62add49cdff4bda274340a783a0c8dd6e31e40a7b2352055515842834" Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.539725 4835 scope.go:117] "RemoveContainer" containerID="e7201f59379d2bead3c65bd0afefdb43d2476d1d60b92329b0df28725c4698f2" Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.544782 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.548725 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:33 crc kubenswrapper[4835]: I0201 07:37:33.583392 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" path="/var/lib/kubelet/pods/619d1e1e-0c68-4844-86de-2e62153f4f43/volumes" Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.547032 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w65gv" event={"ID":"7f1e8788-786f-4f9d-b492-3a036764b28d","Type":"ContainerStarted","Data":"b778322d63fc3addc10376802f0efc0ab9a182e92c0872cc9682ddb7c5728a45"} Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.549246 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tj2nn" event={"ID":"ebf9c948-3fde-47f0-aa35-856193c1a275","Type":"ContainerStarted","Data":"e4867ce2d606303d7b7174df4fede4e9d40b112eacbbd0776384f2c027a9d972"} Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.551437 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" event={"ID":"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a","Type":"ContainerStarted","Data":"2b8ab5a3d71979bd71932b8afef7987524df6361e18ab704eace9a5d232c62ee"} Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.578972 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w65gv" podStartSLOduration=7.027833842 podStartE2EDuration="13.578952676s" podCreationTimestamp="2026-02-01 07:37:25 +0000 UTC" firstStartedPulling="2026-02-01 07:37:27.424285689 +0000 UTC m=+920.544722153" lastFinishedPulling="2026-02-01 07:37:33.975404553 +0000 UTC m=+927.095840987" observedRunningTime="2026-02-01 07:37:38.574942142 +0000 UTC m=+931.695378586" watchObservedRunningTime="2026-02-01 07:37:38.578952676 +0000 UTC m=+931.699389110" Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.596881 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" podStartSLOduration=2.978741166 podStartE2EDuration="8.596853621s" podCreationTimestamp="2026-02-01 07:37:30 +0000 UTC" firstStartedPulling="2026-02-01 07:37:32.019658999 +0000 UTC m=+925.140095433" lastFinishedPulling="2026-02-01 07:37:37.637771454 +0000 UTC m=+930.758207888" observedRunningTime="2026-02-01 07:37:38.590685091 +0000 UTC m=+931.711121535" watchObservedRunningTime="2026-02-01 07:37:38.596853621 +0000 UTC 
m=+931.717290095" Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.617614 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-tj2nn" podStartSLOduration=4.270106433 podStartE2EDuration="9.617586451s" podCreationTimestamp="2026-02-01 07:37:29 +0000 UTC" firstStartedPulling="2026-02-01 07:37:32.307592196 +0000 UTC m=+925.428028630" lastFinishedPulling="2026-02-01 07:37:37.655072214 +0000 UTC m=+930.775508648" observedRunningTime="2026-02-01 07:37:38.611635026 +0000 UTC m=+931.732071480" watchObservedRunningTime="2026-02-01 07:37:38.617586451 +0000 UTC m=+931.738022905" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.117777 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:39 crc kubenswrapper[4835]: E0201 07:37:39.118648 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="extract-utilities" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.118842 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="extract-utilities" Feb 01 07:37:39 crc kubenswrapper[4835]: E0201 07:37:39.119038 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="registry-server" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.119186 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="registry-server" Feb 01 07:37:39 crc kubenswrapper[4835]: E0201 07:37:39.119361 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="extract-content" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.119520 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="extract-content" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.119973 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="registry-server" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.122545 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.125939 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.286117 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nqrp\" (UniqueName: \"kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.286223 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.286278 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.387753 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nqrp\" (UniqueName: \"kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.387902 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.387940 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.388589 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.388734 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.409534 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6nqrp\" (UniqueName: \"kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.443849 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.854945 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.855325 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.888386 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.987093 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:40 crc kubenswrapper[4835]: I0201 07:37:40.583794 4835 generic.go:334] "Generic (PLEG): container finished" podID="607e5b0f-62c9-4e68-9491-bd902f239991" containerID="d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb" exitCode=0 Feb 01 07:37:40 crc kubenswrapper[4835]: I0201 07:37:40.583895 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerDied","Data":"d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb"} Feb 01 07:37:40 crc kubenswrapper[4835]: I0201 07:37:40.584098 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerStarted","Data":"9acfc6d0be7db460ac004d096dd484152438d2aac4cef33e843db2f818b0265e"} Feb 01 07:37:41 crc kubenswrapper[4835]: I0201 07:37:41.594980 4835 generic.go:334] "Generic (PLEG): container finished" podID="b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" containerID="2b8ab5a3d71979bd71932b8afef7987524df6361e18ab704eace9a5d232c62ee" exitCode=0 Feb 01 07:37:41 crc kubenswrapper[4835]: I0201 07:37:41.595123 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" event={"ID":"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a","Type":"ContainerDied","Data":"2b8ab5a3d71979bd71932b8afef7987524df6361e18ab704eace9a5d232c62ee"} Feb 01 07:37:41 crc kubenswrapper[4835]: I0201 07:37:41.597498 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerStarted","Data":"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85"} Feb 01 07:37:42 crc kubenswrapper[4835]: I0201 07:37:42.609506 4835 generic.go:334] "Generic (PLEG): container finished" podID="607e5b0f-62c9-4e68-9491-bd902f239991" containerID="d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85" exitCode=0 Feb 01 07:37:42 crc kubenswrapper[4835]: I0201 07:37:42.610652 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" 
event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerDied","Data":"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85"} Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.018850 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.183051 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fffb\" (UniqueName: \"kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb\") pod \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.183296 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data\") pod \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.189955 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" (UID: "b13e8606-6ec5-4e1b-a3fd-30f8eac5809a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.193603 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb" (OuterVolumeSpecName: "kube-api-access-2fffb") pod "b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" (UID: "b13e8606-6ec5-4e1b-a3fd-30f8eac5809a"). InnerVolumeSpecName "kube-api-access-2fffb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.285289 4835 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.285329 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fffb\" (UniqueName: \"kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.619397 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.619435 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" event={"ID":"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a","Type":"ContainerDied","Data":"9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6"} Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.619594 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.623704 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerStarted","Data":"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b"} Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.651508 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsrwb" podStartSLOduration=2.187116436 podStartE2EDuration="4.651491845s" podCreationTimestamp="2026-02-01 07:37:39 +0000 UTC" firstStartedPulling="2026-02-01 07:37:40.585162383 +0000 UTC m=+933.705598817" lastFinishedPulling="2026-02-01 07:37:43.049537762 +0000 UTC m=+936.169974226" observedRunningTime="2026-02-01 07:37:43.6474639 +0000 UTC m=+936.767900344" watchObservedRunningTime="2026-02-01 07:37:43.651491845 +0000 UTC m=+936.771928279" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.869180 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-worker-794b798997-b6znz"] Feb 01 07:37:43 crc kubenswrapper[4835]: E0201 07:37:43.869512 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" containerName="barbican-db-sync" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.869524 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" containerName="barbican-db-sync" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.869662 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" containerName="barbican-db-sync" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.870379 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.872192 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.872378 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-worker-config-data" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.872841 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-jfvt4" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.880877 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6"] Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.882039 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.886575 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6"] Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.886746 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-keystone-listener-config-data" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.911945 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-794b798997-b6znz"] Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.966982 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-api-6966d58856-gg77m"] Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.968193 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.982623 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-api-config-data" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.982806 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-6966d58856-gg77m"] Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995119 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h274\" (UniqueName: \"kubernetes.io/projected/8653dceb-2d4e-419e-aa35-37bdca49dc2c-kube-api-access-7h274\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995154 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data-custom\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995191 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-logs\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995217 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995245 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9nmb\" (UniqueName: \"kubernetes.io/projected/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-kube-api-access-j9nmb\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:43 crc 
kubenswrapper[4835]: I0201 07:37:43.995272 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8653dceb-2d4e-419e-aa35-37bdca49dc2c-logs\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995311 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data-custom\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995334 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096139 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9nmb\" (UniqueName: \"kubernetes.io/projected/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-kube-api-access-j9nmb\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096381 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82m54\" (UniqueName: \"kubernetes.io/projected/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-kube-api-access-82m54\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096490 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-logs\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096577 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8653dceb-2d4e-419e-aa35-37bdca49dc2c-logs\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096678 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data-custom\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096767 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096848 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096928 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h274\" (UniqueName: \"kubernetes.io/projected/8653dceb-2d4e-419e-aa35-37bdca49dc2c-kube-api-access-7h274\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097002 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data-custom\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097081 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data-custom\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097158 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-logs\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097234 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097283 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8653dceb-2d4e-419e-aa35-37bdca49dc2c-logs\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097705 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-logs\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.105265 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.106532 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.108065 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data-custom\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.110077 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data-custom\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.128665 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9nmb\" (UniqueName: \"kubernetes.io/projected/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-kube-api-access-j9nmb\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.176082 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h274\" (UniqueName: \"kubernetes.io/projected/8653dceb-2d4e-419e-aa35-37bdca49dc2c-kube-api-access-7h274\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.198223 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.198542 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data-custom\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.198683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82m54\" (UniqueName: \"kubernetes.io/projected/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-kube-api-access-82m54\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " 
pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.198782 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-logs\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.199227 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-logs\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.201843 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data-custom\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.202189 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.204641 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.215878 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.225919 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82m54\" (UniqueName: \"kubernetes.io/projected/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-kube-api-access-82m54\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.279680 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.554063 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6"] Feb 01 07:37:44 crc kubenswrapper[4835]: W0201 07:37:44.565361 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8653dceb_2d4e_419e_aa35_37bdca49dc2c.slice/crio-63c3ce822aa5a38e5757556c4ca57aa00c93048fb3884613914d9646af24c806 WatchSource:0}: Error finding container 63c3ce822aa5a38e5757556c4ca57aa00c93048fb3884613914d9646af24c806: Status 404 returned error can't find the container with id 63c3ce822aa5a38e5757556c4ca57aa00c93048fb3884613914d9646af24c806 Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.604780 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-6966d58856-gg77m"] Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.634641 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" event={"ID":"8653dceb-2d4e-419e-aa35-37bdca49dc2c","Type":"ContainerStarted","Data":"63c3ce822aa5a38e5757556c4ca57aa00c93048fb3884613914d9646af24c806"} Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.636624 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" event={"ID":"6a69ee37-d1ea-4c2f-880a-1edb52d4352c","Type":"ContainerStarted","Data":"63484c50bc011e64f76e80d08706847ecf383710bbc20b9bc1954d22e851a72b"} Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.659470 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-794b798997-b6znz"] Feb 01 07:37:44 crc kubenswrapper[4835]: W0201 07:37:44.664105 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8bf5a1c_707a_4858_a716_7bc593ef0fc3.slice/crio-f1ab31cb5b21b1aea1ce644128a9b00fbf971979563ea56f1dbc928a4f19ce1b WatchSource:0}: Error finding container f1ab31cb5b21b1aea1ce644128a9b00fbf971979563ea56f1dbc928a4f19ce1b: Status 404 returned error can't find the container with id f1ab31cb5b21b1aea1ce644128a9b00fbf971979563ea56f1dbc928a4f19ce1b Feb 01 07:37:45 crc kubenswrapper[4835]: I0201 07:37:45.645962 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" event={"ID":"c8bf5a1c-707a-4858-a716-7bc593ef0fc3","Type":"ContainerStarted","Data":"f1ab31cb5b21b1aea1ce644128a9b00fbf971979563ea56f1dbc928a4f19ce1b"} Feb 01 07:37:45 crc kubenswrapper[4835]: I0201 07:37:45.852494 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:45 crc kubenswrapper[4835]: I0201 07:37:45.852577 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:45 crc kubenswrapper[4835]: I0201 07:37:45.901009 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:46 crc kubenswrapper[4835]: I0201 07:37:46.714196 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:47 crc kubenswrapper[4835]: I0201 07:37:47.596091 4835 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-w65gv"] Feb 01 07:37:47 crc kubenswrapper[4835]: I0201 07:37:47.665294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" event={"ID":"6a69ee37-d1ea-4c2f-880a-1edb52d4352c","Type":"ContainerStarted","Data":"f5b8ee84687ed8aca28c18c3766e832ef3a4c90568a55d15a8379eeace0bb974"} Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.310658 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5blqv"] Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.311552 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5blqv" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="registry-server" containerID="cri-o://4b4accff2f1a20d0e288fd1c22d16a0996201d0dc3273c256de8cfeb83f7a5c2" gracePeriod=2 Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.681451 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" event={"ID":"6a69ee37-d1ea-4c2f-880a-1edb52d4352c","Type":"ContainerStarted","Data":"4491c22e6fe8d03497b625c05fa97472dbd12f6f97b5767941284a9200d468a1"} Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.682583 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.682611 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.690259 4835 generic.go:334] "Generic (PLEG): container finished" podID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerID="4b4accff2f1a20d0e288fd1c22d16a0996201d0dc3273c256de8cfeb83f7a5c2" exitCode=0 Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.690370 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerDied","Data":"4b4accff2f1a20d0e288fd1c22d16a0996201d0dc3273c256de8cfeb83f7a5c2"} Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.707360 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" podStartSLOduration=5.70734596 podStartE2EDuration="5.70734596s" podCreationTimestamp="2026-02-01 07:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:37:48.704898916 +0000 UTC m=+941.825335350" watchObservedRunningTime="2026-02-01 07:37:48.70734596 +0000 UTC m=+941.827782394" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.018429 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.178819 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62w42\" (UniqueName: \"kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42\") pod \"48972eb7-80de-4d1a-b9c1-adf412bd3531\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.178889 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities\") pod \"48972eb7-80de-4d1a-b9c1-adf412bd3531\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.179036 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content\") pod \"48972eb7-80de-4d1a-b9c1-adf412bd3531\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.179587 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities" (OuterVolumeSpecName: "utilities") pod "48972eb7-80de-4d1a-b9c1-adf412bd3531" (UID: "48972eb7-80de-4d1a-b9c1-adf412bd3531"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.184641 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42" (OuterVolumeSpecName: "kube-api-access-62w42") pod "48972eb7-80de-4d1a-b9c1-adf412bd3531" (UID: "48972eb7-80de-4d1a-b9c1-adf412bd3531"). InnerVolumeSpecName "kube-api-access-62w42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.234398 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48972eb7-80de-4d1a-b9c1-adf412bd3531" (UID: "48972eb7-80de-4d1a-b9c1-adf412bd3531"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.280299 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.280327 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62w42\" (UniqueName: \"kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.280338 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.444292 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.444334 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.494187 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.701036 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" event={"ID":"8653dceb-2d4e-419e-aa35-37bdca49dc2c","Type":"ContainerStarted","Data":"e8b8e34d7de9a1640fffa41d3521471455d5d34db5f3b102a283b235dd926882"} Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.702600 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" event={"ID":"c8bf5a1c-707a-4858-a716-7bc593ef0fc3","Type":"ContainerStarted","Data":"789ce88b44a0f90da4252454c52e4c6c9443355b68015994137c57083a98f4e5"} Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.705071 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerDied","Data":"a97613cab5446cbb6022f66ef99ec2081a9134140b311ba32389e80a2e221cbc"} Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.705135 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.705175 4835 scope.go:117] "RemoveContainer" containerID="4b4accff2f1a20d0e288fd1c22d16a0996201d0dc3273c256de8cfeb83f7a5c2" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.730468 4835 scope.go:117] "RemoveContainer" containerID="00639fbfdc8c05a878182afacfc54aac4d6d97d80b8d202f1d59fcc0b702129d" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.735985 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5blqv"] Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.742243 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5blqv"] Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.769068 4835 scope.go:117] "RemoveContainer" containerID="d5e2f5d1534650a4cf1433bf132faf98e02e52decf048ace44fbb7b0f61e32fe" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.797521 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.886870 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:50 crc kubenswrapper[4835]: I0201 07:37:50.713710 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" event={"ID":"8653dceb-2d4e-419e-aa35-37bdca49dc2c","Type":"ContainerStarted","Data":"0d322a883867b3891f61684cb07c6a5d3acb96769e7caab946e9d0a9c59890a4"} Feb 01 07:37:50 crc kubenswrapper[4835]: I0201 07:37:50.715669 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" event={"ID":"c8bf5a1c-707a-4858-a716-7bc593ef0fc3","Type":"ContainerStarted","Data":"84d14344b79c1cc2799abe8f41203ecef8883d25ad9a2e7cc85c543fa2ae24d2"} Feb 01 07:37:50 crc kubenswrapper[4835]: I0201 07:37:50.744649 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" podStartSLOduration=2.873613706 podStartE2EDuration="7.744624423s" podCreationTimestamp="2026-02-01 07:37:43 +0000 UTC" firstStartedPulling="2026-02-01 07:37:44.567251687 +0000 UTC m=+937.687688121" lastFinishedPulling="2026-02-01 07:37:49.438262404 +0000 UTC m=+942.558698838" observedRunningTime="2026-02-01 07:37:50.735029694 +0000 UTC m=+943.855466178" watchObservedRunningTime="2026-02-01 07:37:50.744624423 +0000 UTC m=+943.865060867" Feb 01 07:37:50 crc kubenswrapper[4835]: I0201 07:37:50.757762 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" podStartSLOduration=2.983854232 podStartE2EDuration="7.757740044s" podCreationTimestamp="2026-02-01 07:37:43 +0000 UTC" firstStartedPulling="2026-02-01 07:37:44.666373684 +0000 UTC m=+937.786810128" lastFinishedPulling="2026-02-01 07:37:49.440259506 +0000 UTC m=+942.560695940" observedRunningTime="2026-02-01 07:37:50.75409817 +0000 UTC m=+943.874534654" watchObservedRunningTime="2026-02-01 07:37:50.757740044 +0000 UTC m=+943.878176498" Feb 01 07:37:51 crc kubenswrapper[4835]: I0201 07:37:51.576555 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" path="/var/lib/kubelet/pods/48972eb7-80de-4d1a-b9c1-adf412bd3531/volumes" Feb 01 07:37:53 
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.108218 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"]
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.109825 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsrwb" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="registry-server" containerID="cri-o://17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b" gracePeriod=2
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.582220 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsrwb"
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.657800 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content\") pod \"607e5b0f-62c9-4e68-9491-bd902f239991\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") "
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.658095 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities\") pod \"607e5b0f-62c9-4e68-9491-bd902f239991\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") "
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.658180 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nqrp\" (UniqueName: \"kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp\") pod \"607e5b0f-62c9-4e68-9491-bd902f239991\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") "
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.662210 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities" (OuterVolumeSpecName: "utilities") pod "607e5b0f-62c9-4e68-9491-bd902f239991" (UID: "607e5b0f-62c9-4e68-9491-bd902f239991"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.680601 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp" (OuterVolumeSpecName: "kube-api-access-6nqrp") pod "607e5b0f-62c9-4e68-9491-bd902f239991" (UID: "607e5b0f-62c9-4e68-9491-bd902f239991"). InnerVolumeSpecName "kube-api-access-6nqrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.746886 4835 generic.go:334] "Generic (PLEG): container finished" podID="607e5b0f-62c9-4e68-9491-bd902f239991" containerID="17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b" exitCode=0
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.746928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerDied","Data":"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b"}
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.746973 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsrwb"
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.747003 4835 scope.go:117] "RemoveContainer" containerID="17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b"
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.746990 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerDied","Data":"9acfc6d0be7db460ac004d096dd484152438d2aac4cef33e843db2f818b0265e"}
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.759558 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nqrp\" (UniqueName: \"kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp\") on node \"crc\" DevicePath \"\""
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.759590 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.765225 4835 scope.go:117] "RemoveContainer" containerID="d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85"
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.783462 4835 scope.go:117] "RemoveContainer" containerID="d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb"
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.814144 4835 scope.go:117] "RemoveContainer" containerID="17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b"
Feb 01 07:37:53 crc kubenswrapper[4835]: E0201 07:37:53.819539 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b\": container with ID starting with 17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b not found: ID does not exist" containerID="17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b"
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.819582 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b"} err="failed to get container status \"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b\": rpc error: code = NotFound desc = could not find container \"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b\": container with ID starting with 17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b not found: ID does not exist"
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.819606 4835 scope.go:117] "RemoveContainer" containerID="d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85"
Feb 01 07:37:53 crc kubenswrapper[4835]: E0201 07:37:53.820691 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85\": container with ID starting with d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85 not found: ID does not exist" containerID="d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85"
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.820733 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85"} err="failed to get container status \"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85\": rpc error: code = NotFound desc = could not find container \"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85\": container with ID starting with d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85 not found: ID does not exist"
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.820763 4835 scope.go:117] "RemoveContainer" containerID="d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb"
Feb 01 07:37:53 crc kubenswrapper[4835]: E0201 07:37:53.821195 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb\": container with ID starting with d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb not found: ID does not exist" containerID="d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb"
Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.821233 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb"} err="failed to get container status \"d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb\": rpc error: code = NotFound desc = could not find container \"d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb\": container with ID starting with d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb not found: ID does not exist"
Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.079544 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "607e5b0f-62c9-4e68-9491-bd902f239991" (UID: "607e5b0f-62c9-4e68-9491-bd902f239991"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.165397 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.380461 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.388634 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.765786 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5"] Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766350 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="extract-utilities" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766365 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="extract-utilities" Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766386 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766392 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766403 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766425 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766435 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="extract-content" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766441 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="extract-content" Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766450 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="extract-utilities" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766455 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="extract-utilities" Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766467 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="extract-content" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766472 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="extract-content" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766592 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766606 4835 
memory_manager.go:354] "RemoveStaleState removing state" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.767487 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.770467 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.781844 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5"] Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.874616 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.874952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg6d8\" (UniqueName: \"kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.875099 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.976598 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg6d8\" (UniqueName: \"kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.977321 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.977940 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " 
pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.978501 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.978623 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.002466 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg6d8\" (UniqueName: \"kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.088794 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.195251 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.195319 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:37:55 crc kubenswrapper[4835]: W0201 07:37:55.382274 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod846fe1f2_f96b_4447_9336_d58ac094d486.slice/crio-6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43 WatchSource:0}: Error finding container 6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43: Status 404 returned error can't find the container with id 6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43 Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.393562 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5"] Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.576626 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" path="/var/lib/kubelet/pods/607e5b0f-62c9-4e68-9491-bd902f239991/volumes" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.654927 4835 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.665926 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.768661 4835 generic.go:334] "Generic (PLEG): container finished" podID="846fe1f2-f96b-4447-9336-d58ac094d486" containerID="1b077152679f61684216febdeb298224f68b80e1c19f22bc8bc12d2392a4404e" exitCode=0 Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.769600 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" event={"ID":"846fe1f2-f96b-4447-9336-d58ac094d486","Type":"ContainerDied","Data":"1b077152679f61684216febdeb298224f68b80e1c19f22bc8bc12d2392a4404e"} Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.769625 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" event={"ID":"846fe1f2-f96b-4447-9336-d58ac094d486","Type":"ContainerStarted","Data":"6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43"} Feb 01 07:37:56 crc kubenswrapper[4835]: I0201 07:37:56.778574 4835 generic.go:334] "Generic (PLEG): container finished" podID="846fe1f2-f96b-4447-9336-d58ac094d486" containerID="ca6222a2ba1c866e30bf8acbe47b4077ad304afecf74b62ff428461243b2d713" exitCode=0 Feb 01 07:37:56 crc kubenswrapper[4835]: I0201 07:37:56.778625 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" event={"ID":"846fe1f2-f96b-4447-9336-d58ac094d486","Type":"ContainerDied","Data":"ca6222a2ba1c866e30bf8acbe47b4077ad304afecf74b62ff428461243b2d713"} Feb 01 07:37:57 crc kubenswrapper[4835]: I0201 07:37:57.791370 4835 generic.go:334] "Generic (PLEG): container finished" podID="846fe1f2-f96b-4447-9336-d58ac094d486" containerID="6c373212d8d8abaae56569d8aa1acf8724c03fadcbcbe09fe69771c9bf4e7225" exitCode=0 Feb 01 07:37:57 crc kubenswrapper[4835]: I0201 07:37:57.791701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" event={"ID":"846fe1f2-f96b-4447-9336-d58ac094d486","Type":"ContainerDied","Data":"6c373212d8d8abaae56569d8aa1acf8724c03fadcbcbe09fe69771c9bf4e7225"} Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.204966 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.269300 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle\") pod \"846fe1f2-f96b-4447-9336-d58ac094d486\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.269491 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg6d8\" (UniqueName: \"kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8\") pod \"846fe1f2-f96b-4447-9336-d58ac094d486\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.269539 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util\") pod \"846fe1f2-f96b-4447-9336-d58ac094d486\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.270550 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle" (OuterVolumeSpecName: "bundle") pod "846fe1f2-f96b-4447-9336-d58ac094d486" (UID: "846fe1f2-f96b-4447-9336-d58ac094d486"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.276602 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8" (OuterVolumeSpecName: "kube-api-access-sg6d8") pod "846fe1f2-f96b-4447-9336-d58ac094d486" (UID: "846fe1f2-f96b-4447-9336-d58ac094d486"). InnerVolumeSpecName "kube-api-access-sg6d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.302397 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util" (OuterVolumeSpecName: "util") pod "846fe1f2-f96b-4447-9336-d58ac094d486" (UID: "846fe1f2-f96b-4447-9336-d58ac094d486"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.371950 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.372009 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg6d8\" (UniqueName: \"kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.372032 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.822631 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" event={"ID":"846fe1f2-f96b-4447-9336-d58ac094d486","Type":"ContainerDied","Data":"6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43"} Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.822690 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.822751 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.951099 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r"] Feb 01 07:38:09 crc kubenswrapper[4835]: E0201 07:38:09.952241 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="pull" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.952261 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="pull" Feb 01 07:38:09 crc kubenswrapper[4835]: E0201 07:38:09.952286 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="util" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.952298 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="util" Feb 01 07:38:09 crc kubenswrapper[4835]: E0201 07:38:09.952316 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="extract" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.952329 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="extract" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.952566 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="extract" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.953208 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.955618 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-wsg8l" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.956141 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.971124 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r"] Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.043884 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-apiservice-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.043962 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5t64\" (UniqueName: \"kubernetes.io/projected/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-kube-api-access-j5t64\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.043989 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-webhook-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.145331 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5t64\" (UniqueName: \"kubernetes.io/projected/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-kube-api-access-j5t64\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.145389 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-webhook-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.145495 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-apiservice-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.154138 4835 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-webhook-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.154559 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-apiservice-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.164170 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5t64\" (UniqueName: \"kubernetes.io/projected/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-kube-api-access-j5t64\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.275032 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.542510 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r"] Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.922684 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" event={"ID":"26de1ab5-eb0d-4fe4-83ad-25f2262bd958","Type":"ContainerStarted","Data":"13a8d88350d4c01f9f1a85d724fb28342b65da1d8600ee4a5441f680d10bc42f"} Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.921976 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.924673 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.935846 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.968991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" event={"ID":"26de1ab5-eb0d-4fe4-83ad-25f2262bd958","Type":"ContainerStarted","Data":"b82f3d8afa05a0091c353c49b5d86bc1d0e51d1ce5a5ce9b648ab9e32d83eb1b"} Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.970346 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.983924 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.984065 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.984143 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz8wl\" (UniqueName: \"kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.005490 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" podStartSLOduration=2.397000895 podStartE2EDuration="4.005475669s" podCreationTimestamp="2026-02-01 07:38:09 +0000 UTC" firstStartedPulling="2026-02-01 07:38:10.549104767 +0000 UTC m=+963.669541211" lastFinishedPulling="2026-02-01 07:38:12.157579551 +0000 UTC m=+965.278015985" observedRunningTime="2026-02-01 07:38:12.995351315 +0000 UTC m=+966.115787799" watchObservedRunningTime="2026-02-01 07:38:13.005475669 +0000 UTC m=+966.125912103" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.085369 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz8wl\" (UniqueName: \"kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.085520 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.085569 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.086055 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.086061 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.106258 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz8wl\" (UniqueName: \"kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.296895 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.765980 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.975364 4835 generic.go:334] "Generic (PLEG): container finished" podID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerID="074b7b427f3abecc72a80949ae3752e1f1c013ce713d02c038f8c2a763ae2cc2" exitCode=0 Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.975409 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerDied","Data":"074b7b427f3abecc72a80949ae3752e1f1c013ce713d02c038f8c2a763ae2cc2"} Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.975737 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerStarted","Data":"1f8b2e37088d7c85f0e3747b5b4f057bfc4c3678cedcd231a9d989615deb01d9"} Feb 01 07:38:14 crc kubenswrapper[4835]: I0201 07:38:14.984622 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerStarted","Data":"efd6d27aa99471d986be400fe860f654afc230c59f8ab263518d756dc64c1864"} Feb 01 07:38:15 crc kubenswrapper[4835]: I0201 07:38:15.992557 4835 generic.go:334] "Generic (PLEG): container finished" podID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerID="efd6d27aa99471d986be400fe860f654afc230c59f8ab263518d756dc64c1864" exitCode=0 Feb 01 07:38:15 crc kubenswrapper[4835]: I0201 07:38:15.992610 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" 
event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerDied","Data":"efd6d27aa99471d986be400fe860f654afc230c59f8ab263518d756dc64c1864"} Feb 01 07:38:18 crc kubenswrapper[4835]: I0201 07:38:18.011466 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerStarted","Data":"1ba5facccd298a7a96c3ca2eb9b5d6ac7ba944b347689b88a082e975e765f27a"} Feb 01 07:38:18 crc kubenswrapper[4835]: I0201 07:38:18.053903 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xlq66" podStartSLOduration=2.563360806 podStartE2EDuration="6.053879628s" podCreationTimestamp="2026-02-01 07:38:12 +0000 UTC" firstStartedPulling="2026-02-01 07:38:13.97813113 +0000 UTC m=+967.098567564" lastFinishedPulling="2026-02-01 07:38:17.468649912 +0000 UTC m=+970.589086386" observedRunningTime="2026-02-01 07:38:18.053618582 +0000 UTC m=+971.174055056" watchObservedRunningTime="2026-02-01 07:38:18.053879628 +0000 UTC m=+971.174316092" Feb 01 07:38:20 crc kubenswrapper[4835]: I0201 07:38:20.281351 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.598251 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.603189 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.605053 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.605061 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.606198 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-hwgzn" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.606381 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.625955 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.727716 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.727783 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.727805 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt6t9\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9\") pod 
\"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.727837 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.727953 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.828901 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.828968 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.829000 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt6t9\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: E0201 07:38:22.829037 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Feb 01 07:38:22 crc kubenswrapper[4835]: E0201 07:38:22.829056 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Feb 01 07:38:22 crc kubenswrapper[4835]: E0201 07:38:22.829102 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift podName:1edd7394-0f8e-4271-8774-f228946e62f3 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:23.329086461 +0000 UTC m=+976.449522895 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift") pod "swift-storage-0" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3") : configmap "swift-ring-files" not found Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.829053 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.829212 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.834745 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.836857 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.840616 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") device mount path \"/mnt/openstack/pv10\"" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.873881 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.877613 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt6t9\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.297120 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.297456 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.334931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.335119 4835 
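The nestedpendingoperations.go:348 records for swift-storage-0's etc-swift volume show the retry delay growing from 500ms on the first failure to 1s on the second: kubelet retries failed volume operations with exponential backoff, doubling the delay from a 500ms base up to a ceiling (the exact 2m2s cap used below is my recollection and should be treated as an assumption). A toy reproduction:

from datetime import timedelta

def backoff_delays(base=timedelta(milliseconds=500), factor=2.0,
                   cap=timedelta(minutes=2, seconds=2)):  # cap value assumed
    delay = base
    while True:
        yield min(delay, cap)
        delay = min(timedelta(seconds=delay.total_seconds() * factor), cap)

delays = backoff_delays()
print([str(next(delays)) for _ in range(4)])
# ['0:00:00.500000', '0:00:01', '0:00:02', '0:00:04']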
Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.335119 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.335134 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.335200 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift podName:1edd7394-0f8e-4271-8774-f228946e62f3 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:24.335169094 +0000 UTC m=+977.455605528 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift") pod "swift-storage-0" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3") : configmap "swift-ring-files" not found
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.598681 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"]
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.599957 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.603606 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.610909 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"]
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.739650 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb8908-ffc6-4032-8907-da7491bf9304-config-data\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.739718 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sl55\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-kube-api-access-7sl55\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.740192 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-run-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.740217 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-log-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.740242 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.841137 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb8908-ffc6-4032-8907-da7491bf9304-config-data\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.841199 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sl55\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-kube-api-access-7sl55\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.841276 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-run-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.841294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-log-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.841312 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.841589 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.841607 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r: configmap "swift-ring-files" not found
Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.841658 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift podName:8ccb8908-ffc6-4032-8907-da7491bf9304 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:24.341639387 +0000 UTC m=+977.462075821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift") pod "swift-proxy-7d8cf99555-6vq9r" (UID: "8ccb8908-ffc6-4032-8907-da7491bf9304") : configmap "swift-ring-files" not found
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.843046 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-run-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.843140 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-log-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.847065 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb8908-ffc6-4032-8907-da7491bf9304-config-data\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.858795 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sl55\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-kube-api-access-7sl55\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:24 crc kubenswrapper[4835]: I0201 07:38:24.333142 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xlq66" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="registry-server" probeResult="failure" output=<
Feb 01 07:38:24 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s
Feb 01 07:38:24 crc kubenswrapper[4835]: >
Feb 01 07:38:24 crc kubenswrapper[4835]: I0201 07:38:24.347788 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0"
Feb 01 07:38:24 crc kubenswrapper[4835]: I0201 07:38:24.347863 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348001 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348034 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r: configmap "swift-ring-files" not found
Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348032 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348066 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348092 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift podName:8ccb8908-ffc6-4032-8907-da7491bf9304 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:25.348070018 +0000 UTC m=+978.468506452 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift") pod "swift-proxy-7d8cf99555-6vq9r" (UID: "8ccb8908-ffc6-4032-8907-da7491bf9304") : configmap "swift-ring-files" not found
Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348128 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift podName:1edd7394-0f8e-4271-8774-f228946e62f3 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:26.348106829 +0000 UTC m=+979.468543323 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift") pod "swift-storage-0" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3") : configmap "swift-ring-files" not found
Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.191686 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.191751 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.191805 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78"
Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.192482 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.192545 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05" gracePeriod=600
\"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:25 crc kubenswrapper[4835]: E0201 07:38:25.363769 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Feb 01 07:38:25 crc kubenswrapper[4835]: E0201 07:38:25.363787 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r: configmap "swift-ring-files" not found Feb 01 07:38:25 crc kubenswrapper[4835]: E0201 07:38:25.363852 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift podName:8ccb8908-ffc6-4032-8907-da7491bf9304 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:27.363834088 +0000 UTC m=+980.484270522 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift") pod "swift-proxy-7d8cf99555-6vq9r" (UID: "8ccb8908-ffc6-4032-8907-da7491bf9304") : configmap "swift-ring-files" not found Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.066785 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05" exitCode=0 Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.066867 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05"} Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.067207 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3"} Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.067242 4835 scope.go:117] "RemoveContainer" containerID="6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.381148 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:26 crc kubenswrapper[4835]: E0201 07:38:26.381371 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Feb 01 07:38:26 crc kubenswrapper[4835]: E0201 07:38:26.381620 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Feb 01 07:38:26 crc kubenswrapper[4835]: E0201 07:38:26.381714 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift podName:1edd7394-0f8e-4271-8774-f228946e62f3 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:30.381685253 +0000 UTC m=+983.502121727 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift") pod "swift-storage-0" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3") : configmap "swift-ring-files" not found Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.696693 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-w2wt7"] Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.698492 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.701378 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.701787 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.729709 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-w2wt7"] Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.889671 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ffk\" (UniqueName: \"kubernetes.io/projected/b45c05e1-195b-43c0-a44d-1d1c50886dfc-kube-api-access-k9ffk\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.889753 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.889917 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b45c05e1-195b-43c0-a44d-1d1c50886dfc-etc-swift\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.890066 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-dispersionconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.890159 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-swiftconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.890273 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-scripts\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.991942 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-scripts\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.992054 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ffk\" (UniqueName: \"kubernetes.io/projected/b45c05e1-195b-43c0-a44d-1d1c50886dfc-kube-api-access-k9ffk\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.992112 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.992189 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b45c05e1-195b-43c0-a44d-1d1c50886dfc-etc-swift\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: E0201 07:38:26.992230 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.992271 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-dispersionconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: E0201 07:38:26.992293 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:27.492277292 +0000 UTC m=+980.612713726 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.992358 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-swiftconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.993029 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-scripts\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.993573 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b45c05e1-195b-43c0-a44d-1d1c50886dfc-etc-swift\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.002881 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-swiftconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.018367 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-dispersionconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.018775 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ffk\" (UniqueName: \"kubernetes.io/projected/b45c05e1-195b-43c0-a44d-1d1c50886dfc-kube-api-access-k9ffk\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.399746 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.405973 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.501606 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
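At 07:38:27 the "etc-swift" projected mount for swift-proxy finally succeeds, i.e. the swift-ring-files ConfigMap now exists in the namespace, while "ring-data-devices" keeps failing because swift-ring-config-data is still missing. Note the two distinct code paths in the errors: projected.go for the projected volume, configmap.go for the plain ConfigMap volume. A sketch of how such a projected volume is presumably declared, using the upstream k8s.io/api types; only the volume name and the ConfigMap name come from the log, the surrounding layout of the operator's pod spec is an assumption.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// A projected volume sourcing a ConfigMap: until that ConfigMap
    	// exists, MountVolume.SetUp fails exactly as in the entries above.
    	etcSwift := corev1.Volume{
    		Name: "etc-swift",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{
    					{
    						ConfigMap: &corev1.ConfigMapProjection{
    							LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-files"},
    						},
    					},
    				},
    			},
    		},
    	}
    	fmt.Printf("volume %q projects configMap %q\n",
    		etcSwift.Name, etcSwift.VolumeSource.Projected.Sources[0].ConfigMap.Name)
    }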
\"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:27 crc kubenswrapper[4835]: E0201 07:38:27.501815 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:27 crc kubenswrapper[4835]: E0201 07:38:27.501923 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:28.501896108 +0000 UTC m=+981.622332582 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.534841 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:28 crc kubenswrapper[4835]: I0201 07:38:28.518712 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:28 crc kubenswrapper[4835]: E0201 07:38:28.518920 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:28 crc kubenswrapper[4835]: E0201 07:38:28.519395 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:30.519370683 +0000 UTC m=+983.639807207 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:28 crc kubenswrapper[4835]: I0201 07:38:28.599220 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"] Feb 01 07:38:28 crc kubenswrapper[4835]: W0201 07:38:28.610538 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ccb8908_ffc6_4032_8907_da7491bf9304.slice/crio-8908cf22a853343b93df37395aa541eabbcfc98751ced4a9119ab669313c07d7 WatchSource:0}: Error finding container 8908cf22a853343b93df37395aa541eabbcfc98751ced4a9119ab669313c07d7: Status 404 returned error can't find the container with id 8908cf22a853343b93df37395aa541eabbcfc98751ced4a9119ab669313c07d7 Feb 01 07:38:29 crc kubenswrapper[4835]: I0201 07:38:29.093403 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"8908cf22a853343b93df37395aa541eabbcfc98751ced4a9119ab669313c07d7"} Feb 01 07:38:30 crc kubenswrapper[4835]: I0201 07:38:30.457226 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:30 crc kubenswrapper[4835]: I0201 07:38:30.464179 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:30 crc kubenswrapper[4835]: I0201 07:38:30.559202 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:30 crc kubenswrapper[4835]: E0201 07:38:30.559393 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:30 crc kubenswrapper[4835]: E0201 07:38:30.559511 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:34.559481087 +0000 UTC m=+987.679917541 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:30 crc kubenswrapper[4835]: I0201 07:38:30.719165 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:32 crc kubenswrapper[4835]: I0201 07:38:32.115491 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7"} Feb 01 07:38:32 crc kubenswrapper[4835]: I0201 07:38:32.235750 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:38:32 crc kubenswrapper[4835]: W0201 07:38:32.242585 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1edd7394_0f8e_4271_8774_f228946e62f3.slice/crio-965930581ebfe6a06bce16c42d9dbc0702e4b9210c5c9c9057f64d28fcd26803 WatchSource:0}: Error finding container 965930581ebfe6a06bce16c42d9dbc0702e4b9210c5c9c9057f64d28fcd26803: Status 404 returned error can't find the container with id 965930581ebfe6a06bce16c42d9dbc0702e4b9210c5c9c9057f64d28fcd26803 Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.123554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"965930581ebfe6a06bce16c42d9dbc0702e4b9210c5c9c9057f64d28fcd26803"} Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.126419 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="25cca2f3f5f0ca4235e68b5a9b94250ec3bd171877b74e1618d32e349210087f" exitCode=1 Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.126457 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"25cca2f3f5f0ca4235e68b5a9b94250ec3bd171877b74e1618d32e349210087f"} Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.126583 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.127181 4835 scope.go:117] "RemoveContainer" containerID="25cca2f3f5f0ca4235e68b5a9b94250ec3bd171877b74e1618d32e349210087f" Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.416761 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.467474 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.535522 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.136084 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399"} Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.136454 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.138299 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="0aa899c6c8fea3baf53158221f585cfd84d23b944687209ecc3c91475a6c13e1" exitCode=1 Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.138401 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a"} Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.138480 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407"} Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.138501 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"0aa899c6c8fea3baf53158221f585cfd84d23b944687209ecc3c91475a6c13e1"} Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.138517 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541"} Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.160824 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podStartSLOduration=7.972956376 podStartE2EDuration="11.16080586s" podCreationTimestamp="2026-02-01 07:38:23 +0000 UTC" firstStartedPulling="2026-02-01 07:38:28.61513586 +0000 UTC m=+981.735572334" lastFinishedPulling="2026-02-01 07:38:31.802985374 +0000 UTC m=+984.923421818" observedRunningTime="2026-02-01 07:38:34.154386331 +0000 UTC m=+987.274822785" watchObservedRunningTime="2026-02-01 07:38:34.16080586 +0000 UTC m=+987.281242304" Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.660952 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:34 crc kubenswrapper[4835]: E0201 07:38:34.661224 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:34 crc kubenswrapper[4835]: E0201 07:38:34.661574 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:42.661542212 +0000 UTC m=+995.781978676 (durationBeforeRetry 8s). 
Feb 01 07:38:34 crc kubenswrapper[4835]: E0201 07:38:34.661574 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:42.661542212 +0000 UTC m=+995.781978676 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
Feb 01 07:38:35 crc kubenswrapper[4835]: I0201 07:38:35.152781 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399" exitCode=1
Feb 01 07:38:35 crc kubenswrapper[4835]: I0201 07:38:35.152832 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399"}
Feb 01 07:38:35 crc kubenswrapper[4835]: I0201 07:38:35.152883 4835 scope.go:117] "RemoveContainer" containerID="25cca2f3f5f0ca4235e68b5a9b94250ec3bd171877b74e1618d32e349210087f"
Feb 01 07:38:35 crc kubenswrapper[4835]: I0201 07:38:35.158045 4835 scope.go:117] "RemoveContainer" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399"
Feb 01 07:38:35 crc kubenswrapper[4835]: E0201 07:38:35.164026 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.167066 4835 scope.go:117] "RemoveContainer" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399"
Feb 01 07:38:36 crc kubenswrapper[4835]: E0201 07:38:36.167331 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.176168 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="99dd4e1721ebaf1dd026d0a1154a6d27d931d29c79ee7f9d577ac388cfe1e0bd" exitCode=1
Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.176218 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"0a25ac97b5294b86a329b0b8a00b6a7ec519f70771d4bc4890be6a3eaa416540"}
Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.176260 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10"}
Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.176275 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"99dd4e1721ebaf1dd026d0a1154a6d27d931d29c79ee7f9d577ac388cfe1e0bd"}
Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.176291 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a"}
Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.535197 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.917926 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"]
Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.918633 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xlq66" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="registry-server" containerID="cri-o://1ba5facccd298a7a96c3ca2eb9b5d6ac7ba944b347689b88a082e975e765f27a" gracePeriod=2
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.205762 4835 generic.go:334] "Generic (PLEG): container finished" podID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerID="1ba5facccd298a7a96c3ca2eb9b5d6ac7ba944b347689b88a082e975e765f27a" exitCode=0
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.206258 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerDied","Data":"1ba5facccd298a7a96c3ca2eb9b5d6ac7ba944b347689b88a082e975e765f27a"}
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.219572 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4"}
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.219873 4835 scope.go:117] "RemoveContainer" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399"
Feb 01 07:38:37 crc kubenswrapper[4835]: E0201 07:38:37.220169 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.229590 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.381533 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xlq66"
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.419890 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz8wl\" (UniqueName: \"kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl\") pod \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") "
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.420032 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content\") pod \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") "
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.420065 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities\") pod \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") "
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.421719 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities" (OuterVolumeSpecName: "utilities") pod "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" (UID: "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.428685 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl" (OuterVolumeSpecName: "kube-api-access-gz8wl") pod "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" (UID: "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d"). InnerVolumeSpecName "kube-api-access-gz8wl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.521907 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.521950 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz8wl\" (UniqueName: \"kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl\") on node \"crc\" DevicePath \"\""
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.538321 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.599181 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" (UID: "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.627324 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.228301 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerDied","Data":"1f8b2e37088d7c85f0e3747b5b4f057bfc4c3678cedcd231a9d989615deb01d9"}
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.228726 4835 scope.go:117] "RemoveContainer" containerID="1ba5facccd298a7a96c3ca2eb9b5d6ac7ba944b347689b88a082e975e765f27a"
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.228367 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xlq66"
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.237733 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b"}
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.237887 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876"}
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.237979 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"f70585ca73d7397e897eb3941142c52d65b1003b4040f8c826ddc548b6f8f0d4"}
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.238065 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"ce9f6e51f49479167482c65a57955f65790012dea41865e75c987db5f30a8585"}
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.238141 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14"}
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.238199 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c"}
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.238678 4835 scope.go:117] "RemoveContainer" containerID="0aa899c6c8fea3baf53158221f585cfd84d23b944687209ecc3c91475a6c13e1"
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.238825 4835 scope.go:117] "RemoveContainer" containerID="99dd4e1721ebaf1dd026d0a1154a6d27d931d29c79ee7f9d577ac388cfe1e0bd"
Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.258696 4835 scope.go:117] "RemoveContainer" containerID="efd6d27aa99471d986be400fe860f654afc230c59f8ab263518d756dc64c1864"
containerID="074b7b427f3abecc72a80949ae3752e1f1c013ce713d02c038f8c2a763ae2cc2" Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.313063 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.319767 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.258394 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="f70585ca73d7397e897eb3941142c52d65b1003b4040f8c826ddc548b6f8f0d4" exitCode=1 Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.258939 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea" exitCode=1 Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.258965 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad" exitCode=1 Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.258634 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"f70585ca73d7397e897eb3941142c52d65b1003b4040f8c826ddc548b6f8f0d4"} Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259043 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea"} Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259067 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad"} Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259093 4835 scope.go:117] "RemoveContainer" containerID="99dd4e1721ebaf1dd026d0a1154a6d27d931d29c79ee7f9d577ac388cfe1e0bd" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259325 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259495 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259711 4835 scope.go:117] "RemoveContainer" containerID="f70585ca73d7397e897eb3941142c52d65b1003b4040f8c826ddc548b6f8f0d4" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.334148 4835 scope.go:117] "RemoveContainer" containerID="0aa899c6c8fea3baf53158221f585cfd84d23b944687209ecc3c91475a6c13e1" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.537360 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:39 crc kubenswrapper[4835]: E0201 07:38:39.549049 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting 
failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.578968 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" path="/var/lib/kubelet/pods/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d/volumes" Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.282006 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd" exitCode=1 Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.282078 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd"} Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.282126 4835 scope.go:117] "RemoveContainer" containerID="f70585ca73d7397e897eb3941142c52d65b1003b4040f8c826ddc548b6f8f0d4" Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.283232 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad" Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.283399 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea" Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.283708 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd" Feb 01 07:38:40 crc kubenswrapper[4835]: E0201 07:38:40.284581 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:38:41 crc kubenswrapper[4835]: I0201 07:38:41.316722 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad" Feb 01 07:38:41 crc kubenswrapper[4835]: I0201 07:38:41.317232 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea" Feb 01 07:38:41 crc kubenswrapper[4835]: I0201 07:38:41.317476 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd" Feb 01 07:38:41 crc kubenswrapper[4835]: E0201 07:38:41.317945 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:38:42 crc kubenswrapper[4835]: I0201 07:38:42.536733 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:42 crc kubenswrapper[4835]: I0201 07:38:42.539531 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:42 crc kubenswrapper[4835]: I0201 07:38:42.703322 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:42 crc kubenswrapper[4835]: E0201 07:38:42.703562 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:42 crc kubenswrapper[4835]: E0201 07:38:42.703663 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:58.703640203 +0000 UTC m=+1011.824076677 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.353481 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="ce9f6e51f49479167482c65a57955f65790012dea41865e75c987db5f30a8585" exitCode=1 Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.353711 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"ce9f6e51f49479167482c65a57955f65790012dea41865e75c987db5f30a8585"} Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.354702 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad" Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.354795 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea" Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.354895 4835 scope.go:117] "RemoveContainer" containerID="ce9f6e51f49479167482c65a57955f65790012dea41865e75c987db5f30a8585" Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.354917 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd" Feb 01 07:38:44 crc kubenswrapper[4835]: E0201 07:38:44.601401 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.371859 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1"} Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.374018 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad" Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.374201 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea" Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.374400 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd" Feb 01 07:38:45 crc kubenswrapper[4835]: E0201 07:38:45.374816 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator 
pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.538460 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.538571 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.539539 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.539583 4835 scope.go:117] "RemoveContainer" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399" Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.539634 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7" gracePeriod=30 Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.541122 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.380567 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7" exitCode=0 Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.381142 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7"} Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.381222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12"} Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.381292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" 
event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c"} Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.382166 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.382308 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:47 crc kubenswrapper[4835]: I0201 07:38:47.397637 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" exitCode=1 Feb 01 07:38:47 crc kubenswrapper[4835]: I0201 07:38:47.397703 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12"} Feb 01 07:38:47 crc kubenswrapper[4835]: I0201 07:38:47.398147 4835 scope.go:117] "RemoveContainer" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399" Feb 01 07:38:47 crc kubenswrapper[4835]: I0201 07:38:47.398841 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" Feb 01 07:38:47 crc kubenswrapper[4835]: E0201 07:38:47.399319 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:38:48 crc kubenswrapper[4835]: I0201 07:38:48.420621 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" Feb 01 07:38:48 crc kubenswrapper[4835]: E0201 07:38:48.421170 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:38:48 crc kubenswrapper[4835]: I0201 07:38:48.535869 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:49 crc kubenswrapper[4835]: I0201 07:38:49.430722 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" Feb 01 07:38:49 crc kubenswrapper[4835]: E0201 07:38:49.431564 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:38:51 crc kubenswrapper[4835]: I0201 07:38:51.539995 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" 
podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:52 crc kubenswrapper[4835]: I0201 07:38:52.536761 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:54 crc kubenswrapper[4835]: I0201 07:38:54.537683 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:55 crc kubenswrapper[4835]: I0201 07:38:55.567243 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad" Feb 01 07:38:55 crc kubenswrapper[4835]: I0201 07:38:55.567326 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea" Feb 01 07:38:55 crc kubenswrapper[4835]: I0201 07:38:55.567480 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd" Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.502811 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0" exitCode=1 Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.503697 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf" exitCode=1 Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.503126 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11"} Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.503801 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0"} Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.503833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf"} Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.503865 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea" Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.504979 4835 scope.go:117] "RemoveContainer" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf" Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.505207 4835 scope.go:117] "RemoveContainer" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0" Feb 01 07:38:56 crc kubenswrapper[4835]: E0201 07:38:56.506248 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator 
pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.565450 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad" Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.527849 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11" exitCode=1 Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.527907 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11"} Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.527971 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd" Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.529220 4835 scope.go:117] "RemoveContainer" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf" Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.529361 4835 scope.go:117] "RemoveContainer" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0" Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.529594 4835 scope.go:117] "RemoveContainer" containerID="e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11" Feb 01 07:38:57 crc kubenswrapper[4835]: E0201 07:38:57.529930 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.537637 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.537915 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.537964 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 
07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.538646 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.538669 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.538695 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c" gracePeriod=30 Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.540018 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:57 crc kubenswrapper[4835]: E0201 07:38:57.860575 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.553376 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c" exitCode=0 Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.553608 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c"} Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.553862 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732"} Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.553896 4835 scope.go:117] "RemoveContainer" containerID="95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7" Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.554214 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.554753 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" Feb 01 07:38:58 crc kubenswrapper[4835]: E0201 07:38:58.555067 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" 
podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.766454 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:58 crc kubenswrapper[4835]: E0201 07:38:58.766598 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:58 crc kubenswrapper[4835]: E0201 07:38:58.767090 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:39:30.767067401 +0000 UTC m=+1043.887503835 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:59 crc kubenswrapper[4835]: I0201 07:38:59.565698 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" Feb 01 07:38:59 crc kubenswrapper[4835]: E0201 07:38:59.565912 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:02 crc kubenswrapper[4835]: I0201 07:39:02.539771 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:03 crc kubenswrapper[4835]: I0201 07:39:03.538059 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:06 crc kubenswrapper[4835]: I0201 07:39:06.539725 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:07 crc kubenswrapper[4835]: I0201 07:39:07.537590 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:09 crc kubenswrapper[4835]: I0201 07:39:09.537649 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:09 crc kubenswrapper[4835]: 
I0201 07:39:09.538350 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:09 crc kubenswrapper[4835]: I0201 07:39:09.539059 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:39:09 crc kubenswrapper[4835]: I0201 07:39:09.539097 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" Feb 01 07:39:09 crc kubenswrapper[4835]: I0201 07:39:09.539135 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732" gracePeriod=30 Feb 01 07:39:09 crc kubenswrapper[4835]: I0201 07:39:09.546085 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.685634 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732" exitCode=0 Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.687399 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732"} Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.687547 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e"} Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.687578 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941"} Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.688193 4835 scope.go:117] "RemoveContainer" containerID="75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c" Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.690048 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.690115 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:11 crc kubenswrapper[4835]: I0201 07:39:11.701981 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" exitCode=1 Feb 01 07:39:11 crc kubenswrapper[4835]: I0201 07:39:11.702066 4835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e"} Feb 01 07:39:11 crc kubenswrapper[4835]: I0201 07:39:11.702585 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" Feb 01 07:39:11 crc kubenswrapper[4835]: I0201 07:39:11.702776 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:11 crc kubenswrapper[4835]: E0201 07:39:11.703103 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:12 crc kubenswrapper[4835]: I0201 07:39:12.535711 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:12 crc kubenswrapper[4835]: I0201 07:39:12.566766 4835 scope.go:117] "RemoveContainer" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf" Feb 01 07:39:12 crc kubenswrapper[4835]: I0201 07:39:12.566845 4835 scope.go:117] "RemoveContainer" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0" Feb 01 07:39:12 crc kubenswrapper[4835]: I0201 07:39:12.567057 4835 scope.go:117] "RemoveContainer" containerID="e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11" Feb 01 07:39:12 crc kubenswrapper[4835]: E0201 07:39:12.567353 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:12 crc kubenswrapper[4835]: I0201 07:39:12.723081 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:12 crc kubenswrapper[4835]: E0201 07:39:12.723259 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:13 crc kubenswrapper[4835]: I0201 07:39:13.731178 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:13 crc kubenswrapper[4835]: E0201 07:39:13.731496 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:15 crc kubenswrapper[4835]: I0201 07:39:15.538265 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:17 crc kubenswrapper[4835]: I0201 07:39:17.538571 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:18 crc kubenswrapper[4835]: I0201 07:39:18.537217 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.538127 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.538595 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.539523 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.539566 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.539616 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" gracePeriod=30 Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.541264 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:21 crc kubenswrapper[4835]: E0201 07:39:21.659994 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.798989 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" exitCode=0 Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.799071 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941"} Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.799276 4835 scope.go:117] "RemoveContainer" containerID="fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.799851 4835 scope.go:117] "RemoveContainer" containerID="cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.799888 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:21 crc kubenswrapper[4835]: E0201 07:39:21.800122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:26 crc kubenswrapper[4835]: I0201 07:39:26.568600 4835 scope.go:117] "RemoveContainer" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf" Feb 01 07:39:26 crc kubenswrapper[4835]: I0201 07:39:26.569878 4835 scope.go:117] "RemoveContainer" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0" Feb 01 07:39:26 crc kubenswrapper[4835]: I0201 07:39:26.570125 4835 scope.go:117] "RemoveContainer" containerID="e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11" Feb 01 07:39:26 crc kubenswrapper[4835]: I0201 07:39:26.886777 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17"} Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918261 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" exitCode=1 Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918317 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="0a25ac97b5294b86a329b0b8a00b6a7ec519f70771d4bc4890be6a3eaa416540" exitCode=1 Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918326 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" 
containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" exitCode=1 Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918386 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17"} Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918508 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"0a25ac97b5294b86a329b0b8a00b6a7ec519f70771d4bc4890be6a3eaa416540"} Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918538 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e"} Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918560 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c"} Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918597 4835 scope.go:117] "RemoveContainer" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf" Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.919169 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.919333 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.919381 4835 scope.go:117] "RemoveContainer" containerID="0a25ac97b5294b86a329b0b8a00b6a7ec519f70771d4bc4890be6a3eaa416540" Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.983330 4835 scope.go:117] "RemoveContainer" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0" Feb 01 07:39:28 crc kubenswrapper[4835]: E0201 07:39:28.164812 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.936460 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" exitCode=1 Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.936524 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e"} Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.936634 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c"} Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.936670 4835 scope.go:117] "RemoveContainer" containerID="e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11" Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.937457 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.937612 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.937875 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:39:28 crc kubenswrapper[4835]: E0201 07:39:28.938621 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:29 crc kubenswrapper[4835]: I0201 07:39:29.957516 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:39:29 crc kubenswrapper[4835]: I0201 07:39:29.957616 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:39:29 crc kubenswrapper[4835]: I0201 07:39:29.957731 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:39:29 crc kubenswrapper[4835]: E0201 07:39:29.958098 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:30 crc kubenswrapper[4835]: I0201 07:39:30.770600 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:39:30 crc kubenswrapper[4835]: E0201 07:39:30.770864 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:39:30 crc kubenswrapper[4835]: E0201 07:39:30.771268 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:40:34.771226436 +0000 UTC m=+1107.891662910 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:39:37 crc kubenswrapper[4835]: I0201 07:39:37.576757 4835 scope.go:117] "RemoveContainer" containerID="cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" Feb 01 07:39:37 crc kubenswrapper[4835]: I0201 07:39:37.577481 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:37 crc kubenswrapper[4835]: E0201 07:39:37.577958 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:43 crc kubenswrapper[4835]: I0201 07:39:43.567969 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:39:43 crc kubenswrapper[4835]: I0201 07:39:43.568808 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:39:43 crc kubenswrapper[4835]: I0201 07:39:43.569018 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:39:43 crc kubenswrapper[4835]: E0201 07:39:43.569515 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:49 crc kubenswrapper[4835]: I0201 07:39:49.567288 4835 scope.go:117] "RemoveContainer" containerID="cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" Feb 01 07:39:49 crc 
kubenswrapper[4835]: I0201 07:39:49.567988 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:49 crc kubenswrapper[4835]: E0201 07:39:49.779099 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:50 crc kubenswrapper[4835]: I0201 07:39:50.141065 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc"} Feb 01 07:39:50 crc kubenswrapper[4835]: I0201 07:39:50.141453 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:50 crc kubenswrapper[4835]: I0201 07:39:50.141901 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:50 crc kubenswrapper[4835]: E0201 07:39:50.142312 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:51 crc kubenswrapper[4835]: I0201 07:39:51.150124 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:52 crc kubenswrapper[4835]: I0201 07:39:52.158748 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"} Feb 01 07:39:52 crc kubenswrapper[4835]: I0201 07:39:52.159687 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:53 crc kubenswrapper[4835]: I0201 07:39:53.172009 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" exitCode=1 Feb 01 07:39:53 crc kubenswrapper[4835]: I0201 07:39:53.172073 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"} Feb 01 07:39:53 crc kubenswrapper[4835]: I0201 07:39:53.172144 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:53 crc kubenswrapper[4835]: I0201 07:39:53.172849 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:39:53 crc kubenswrapper[4835]: E0201 07:39:53.173179 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: 
\"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.185534 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:39:54 crc kubenswrapper[4835]: E0201 07:39:54.186501 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.189541 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.535903 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.538473 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.568498 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.568627 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.568805 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:39:54 crc kubenswrapper[4835]: E0201 07:39:54.569265 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:55 crc kubenswrapper[4835]: I0201 07:39:55.197911 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:39:55 crc kubenswrapper[4835]: E0201 07:39:55.198343 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server 
Feb 01 07:39:55 crc kubenswrapper[4835]: I0201 07:39:55.198650 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:39:57 crc kubenswrapper[4835]: I0201 07:39:57.538258 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:39:57 crc kubenswrapper[4835]: I0201 07:39:57.539109 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.538698 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.539511 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.541532 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.541612 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.541710 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" gracePeriod=30
Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.542942 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:00 crc kubenswrapper[4835]: E0201 07:40:00.696065 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:40:01 crc kubenswrapper[4835]: I0201 07:40:01.270775 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" exitCode=0
Feb 01 07:40:01 crc kubenswrapper[4835]: I0201 07:40:01.270855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc"}
Feb 01 07:40:01 crc kubenswrapper[4835]: I0201 07:40:01.271263 4835 scope.go:117] "RemoveContainer" containerID="cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941"
Feb 01 07:40:01 crc kubenswrapper[4835]: I0201 07:40:01.272090 4835 scope.go:117] "RemoveContainer" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc"
Feb 01 07:40:01 crc kubenswrapper[4835]: I0201 07:40:01.272150 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
Feb 01 07:40:01 crc kubenswrapper[4835]: E0201 07:40:01.272720 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:40:05 crc kubenswrapper[4835]: I0201 07:40:05.567068 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17"
Feb 01 07:40:05 crc kubenswrapper[4835]: I0201 07:40:05.567473 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c"
Feb 01 07:40:05 crc kubenswrapper[4835]: I0201 07:40:05.567589 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e"
Feb 01 07:40:05 crc kubenswrapper[4835]: E0201 07:40:05.567877 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:40:13 crc kubenswrapper[4835]: I0201 07:40:13.566259 4835 scope.go:117] "RemoveContainer" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc"
Feb 01 07:40:13 crc kubenswrapper[4835]: I0201 07:40:13.566825 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:13 crc kubenswrapper[4835]: E0201 07:40:13.567090 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:40:18 crc kubenswrapper[4835]: I0201 07:40:18.566614 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:40:18 crc kubenswrapper[4835]: I0201 07:40:18.566916 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:40:18 crc kubenswrapper[4835]: I0201 07:40:18.566997 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452145 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" exitCode=1 Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452539 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" exitCode=1 Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452259 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b"} Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452587 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65"} Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452607 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a"} Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452627 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.453658 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.453854 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:40:19 crc kubenswrapper[4835]: E0201 07:40:19.463513 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator 
pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.520111 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.476807 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" exitCode=1 Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.476886 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b"} Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.477266 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.479852 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.480188 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.481867 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:40:20 crc kubenswrapper[4835]: E0201 07:40:20.482533 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:40:25 crc kubenswrapper[4835]: I0201 07:40:25.191502 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:40:25 crc kubenswrapper[4835]: I0201 07:40:25.191865 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:40:28 crc kubenswrapper[4835]: I0201 07:40:28.566876 4835 scope.go:117] 
"RemoveContainer" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" Feb 01 07:40:28 crc kubenswrapper[4835]: I0201 07:40:28.567178 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:28 crc kubenswrapper[4835]: E0201 07:40:28.567372 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:40:29 crc kubenswrapper[4835]: E0201 07:40:29.724037 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:40:30 crc kubenswrapper[4835]: I0201 07:40:30.601297 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:40:31 crc kubenswrapper[4835]: I0201 07:40:31.567857 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:40:31 crc kubenswrapper[4835]: I0201 07:40:31.568594 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:40:31 crc kubenswrapper[4835]: I0201 07:40:31.568802 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:40:31 crc kubenswrapper[4835]: E0201 07:40:31.569366 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:40:34 crc kubenswrapper[4835]: I0201 07:40:34.872968 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:40:34 crc kubenswrapper[4835]: E0201 07:40:34.873197 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:40:34 crc kubenswrapper[4835]: E0201 07:40:34.873677 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:42:36.873657218 +0000 UTC m=+1229.994093672 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:40:43 crc kubenswrapper[4835]: I0201 07:40:43.566665 4835 scope.go:117] "RemoveContainer" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" Feb 01 07:40:43 crc kubenswrapper[4835]: I0201 07:40:43.567379 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:43 crc kubenswrapper[4835]: I0201 07:40:43.568129 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:40:43 crc kubenswrapper[4835]: I0201 07:40:43.568277 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:40:43 crc kubenswrapper[4835]: I0201 07:40:43.568508 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:40:43 crc kubenswrapper[4835]: E0201 07:40:43.569035 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:40:43 crc kubenswrapper[4835]: E0201 07:40:43.778612 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:40:44 crc kubenswrapper[4835]: I0201 07:40:44.752976 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"} Feb 01 07:40:44 crc kubenswrapper[4835]: I0201 07:40:44.753285 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:40:44 crc kubenswrapper[4835]: I0201 07:40:44.753811 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:44 crc kubenswrapper[4835]: E0201 
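The configmap.go:193 and nestedpendingoperations.go:348 entries above fail the ring-data-devices mount because the ConfigMap swift-kuttl-tests/swift-ring-config-data does not exist yet, and the volume manager then waits 2m2s, its maximum retry interval, before the next MountVolume.SetUp attempt. A hedged client-go sketch of creating that ConfigMap so the mount could proceed; the data key and contents are placeholders, since the keys the rebalance job expects are not visible in this log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig; in-cluster config would
	// work the same way.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Name and namespace are taken from the log entries above; the data
	// content is a placeholder, not the operator's real ring definition.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "swift-ring-config-data",
			Namespace: "swift-kuttl-tests",
		},
		Data: map[string]string{
			"devices.txt": "# placeholder ring device list",
		},
	}
	created, err := clientset.CoreV1().ConfigMaps("swift-kuttl-tests").
		Create(context.TODO(), cm, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created configmap", created.Name)
}
```

In the operator's normal flow this ConfigMap is presumably created by the swift-operator itself, so the failures above more likely reflect resource-creation ordering in the kuttl test than a missing manual step.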
Feb 01 07:40:45 crc kubenswrapper[4835]: I0201 07:40:45.760953 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
Feb 01 07:40:45 crc kubenswrapper[4835]: E0201 07:40:45.761656 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:40:48 crc kubenswrapper[4835]: I0201 07:40:48.540193 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:51 crc kubenswrapper[4835]: I0201 07:40:51.538699 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:52 crc kubenswrapper[4835]: I0201 07:40:52.539887 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.538215 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.538311 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.539247 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.539271 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.539303 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" gracePeriod=30
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.540111 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:54 crc kubenswrapper[4835]: E0201 07:40:54.668068 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.842676 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" exitCode=0
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.842690 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"}
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.842802 4835 scope.go:117] "RemoveContainer" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.843785 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.843840 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
Feb 01 07:40:54 crc kubenswrapper[4835]: E0201 07:40:54.844568 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:40:55 crc kubenswrapper[4835]: I0201 07:40:55.192201 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:40:55 crc kubenswrapper[4835]: I0201 07:40:55.192302 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:40:55 crc kubenswrapper[4835]: I0201 07:40:55.567712 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a"
Feb 01 07:40:55 crc kubenswrapper[4835]: I0201 07:40:55.567838 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65"
Feb 01 07:40:55 crc kubenswrapper[4835]: I0201 07:40:55.568023 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b"
Feb 01 07:40:55 crc kubenswrapper[4835]: E0201 07:40:55.568698 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:41:08 crc kubenswrapper[4835]: I0201 07:41:08.567808 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a"
Feb 01 07:41:08 crc kubenswrapper[4835]: I0201 07:41:08.568658 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65"
Feb 01 07:41:08 crc kubenswrapper[4835]: I0201 07:41:08.568852 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b"
Feb 01 07:41:08 crc kubenswrapper[4835]: E0201 07:41:08.569440 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:41:09 crc kubenswrapper[4835]: I0201 07:41:09.567751 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:41:09 crc kubenswrapper[4835]: I0201 07:41:09.567795 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
Feb 01 07:41:09 crc kubenswrapper[4835]: E0201 07:41:09.568121 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:41:21 crc kubenswrapper[4835]: I0201 07:41:21.567369 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a"
Feb 01 07:41:21 crc kubenswrapper[4835]: I0201 07:41:21.568068 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65"
Feb 01 07:41:21 crc kubenswrapper[4835]: I0201 07:41:21.568249 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b"
Feb 01 07:41:21 crc kubenswrapper[4835]: E0201 07:41:21.568827 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:41:24 crc kubenswrapper[4835]: I0201 07:41:24.567571 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:41:24 crc kubenswrapper[4835]: I0201 07:41:24.567613 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
Feb 01 07:41:24 crc kubenswrapper[4835]: E0201 07:41:24.846385 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.133898 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"}
Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.134426 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:41:25 crc kubenswrapper[4835]: E0201 07:41:25.134611 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.134747 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.192521 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.192926 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.193149 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.194547 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.194845 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3" gracePeriod=600 Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.145086 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" exitCode=1 Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.145140 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"} Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.145806 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.146026 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.146116 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:41:26 crc kubenswrapper[4835]: E0201 07:41:26.146725 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.151020 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3" exitCode=0 Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.151086 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3"} Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.151138 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced"} Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.196123 4835 scope.go:117] "RemoveContainer" containerID="9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05" Feb 01 07:41:27 crc kubenswrapper[4835]: I0201 07:41:27.168498 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:41:27 crc kubenswrapper[4835]: I0201 07:41:27.168837 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:41:27 crc kubenswrapper[4835]: E0201 07:41:27.169191 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:41:27 crc kubenswrapper[4835]: I0201 07:41:27.535396 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:41:28 crc kubenswrapper[4835]: I0201 07:41:28.182779 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:41:28 crc kubenswrapper[4835]: I0201 07:41:28.182822 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:41:28 crc kubenswrapper[4835]: E0201 07:41:28.183170 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:41:32 crc kubenswrapper[4835]: I0201 07:41:32.568105 4835 scope.go:117] "RemoveContainer" 
containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:41:32 crc kubenswrapper[4835]: I0201 07:41:32.568973 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:41:32 crc kubenswrapper[4835]: I0201 07:41:32.569154 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:41:32 crc kubenswrapper[4835]: E0201 07:41:32.569867 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:41:40 crc kubenswrapper[4835]: I0201 07:41:40.567158 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:41:40 crc kubenswrapper[4835]: I0201 07:41:40.567744 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:41:40 crc kubenswrapper[4835]: E0201 07:41:40.567967 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:41:44 crc kubenswrapper[4835]: I0201 07:41:44.567154 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:41:44 crc kubenswrapper[4835]: I0201 07:41:44.567820 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:41:44 crc kubenswrapper[4835]: I0201 07:41:44.567910 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.393067 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" exitCode=1 Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.393154 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"} Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.393462 4835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"} Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.393481 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"} Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.393505 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.394524 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:41:45 crc kubenswrapper[4835]: E0201 07:41:45.395243 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.407120 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" exitCode=1 Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.407388 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" exitCode=1 Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.407190 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"} Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.407448 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"} Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.407472 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.408130 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.408191 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.408285 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:41:46 crc kubenswrapper[4835]: E0201 07:41:46.408671 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with 
CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.478334 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:41:47 crc kubenswrapper[4835]: I0201 07:41:47.442982 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:41:47 crc kubenswrapper[4835]: I0201 07:41:47.443105 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:41:47 crc kubenswrapper[4835]: I0201 07:41:47.443281 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:41:47 crc kubenswrapper[4835]: E0201 07:41:47.443842 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:41:55 crc kubenswrapper[4835]: I0201 07:41:55.569306 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:41:55 crc kubenswrapper[4835]: I0201 07:41:55.570012 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:41:55 crc kubenswrapper[4835]: E0201 07:41:55.570345 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.543807 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1" exitCode=1 Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.543900 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1"} Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.544362 4835 scope.go:117] "RemoveContainer" containerID="ce9f6e51f49479167482c65a57955f65790012dea41865e75c987db5f30a8585" Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.546529 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.546622 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.546727 4835 scope.go:117] "RemoveContainer" containerID="feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1" Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.546751 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:41:57 crc kubenswrapper[4835]: E0201 07:41:57.547219 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:42:07 crc kubenswrapper[4835]: I0201 07:42:07.585019 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:42:07 crc kubenswrapper[4835]: I0201 07:42:07.585882 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:42:07 crc kubenswrapper[4835]: E0201 07:42:07.587569 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:42:09 crc kubenswrapper[4835]: I0201 07:42:09.567950 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:42:09 crc kubenswrapper[4835]: I0201 07:42:09.568703 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:42:09 crc 
Feb 01 07:42:09 crc kubenswrapper[4835]: I0201 07:42:09.568917 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:42:09 crc kubenswrapper[4835]: E0201 07:42:09.743742 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:42:10 crc kubenswrapper[4835]: I0201 07:42:10.691482 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b"}
Feb 01 07:42:10 crc kubenswrapper[4835]: I0201 07:42:10.692673 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"
Feb 01 07:42:10 crc kubenswrapper[4835]: I0201 07:42:10.692761 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"
Feb 01 07:42:10 crc kubenswrapper[4835]: I0201 07:42:10.692883 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:42:10 crc kubenswrapper[4835]: E0201 07:42:10.693211 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:42:21 crc kubenswrapper[4835]: I0201 07:42:21.567842 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:42:21 crc kubenswrapper[4835]: I0201 07:42:21.568590 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:21 crc kubenswrapper[4835]: E0201 07:42:21.755895 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:42:21 crc kubenswrapper[4835]: I0201 07:42:21.787460 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"}
Feb 01 07:42:21 crc kubenswrapper[4835]: I0201 07:42:21.787700 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:42:21 crc kubenswrapper[4835]: I0201 07:42:21.788066 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:21 crc kubenswrapper[4835]: E0201 07:42:21.788447 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:42:22 crc kubenswrapper[4835]: I0201 07:42:22.796039 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:22 crc kubenswrapper[4835]: E0201 07:42:22.796610 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:42:25 crc kubenswrapper[4835]: I0201 07:42:25.566785 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"
Feb 01 07:42:25 crc kubenswrapper[4835]: I0201 07:42:25.566873 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"
Feb 01 07:42:25 crc kubenswrapper[4835]: I0201 07:42:25.567008 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:42:25 crc kubenswrapper[4835]: E0201 07:42:25.567378 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:42:27 crc kubenswrapper[4835]: I0201 07:42:27.539604 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:27 crc kubenswrapper[4835]: I0201 07:42:27.539867 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:30 crc kubenswrapper[4835]: I0201 07:42:30.537987 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:32 crc kubenswrapper[4835]: I0201 07:42:32.537539 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.537822 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.537916 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.538662 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.538686 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.538722 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" gracePeriod=30
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.539573 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:33 crc kubenswrapper[4835]: E0201 07:42:33.603012 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc"
Feb 01 07:42:33 crc kubenswrapper[4835]: E0201 07:42:33.661710 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\",
failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.895109 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" exitCode=0 Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.895186 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"} Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.895493 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.895779 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.896586 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.896641 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:42:33 crc kubenswrapper[4835]: E0201 07:42:33.896981 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:42:36 crc kubenswrapper[4835]: I0201 07:42:36.878859 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:42:36 crc kubenswrapper[4835]: E0201 07:42:36.879050 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:42:36 crc kubenswrapper[4835]: E0201 07:42:36.880045 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:44:38.880002249 +0000 UTC m=+1352.000438713 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:42:38 crc kubenswrapper[4835]: I0201 07:42:38.568661 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:42:38 crc kubenswrapper[4835]: I0201 07:42:38.569177 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:42:38 crc kubenswrapper[4835]: I0201 07:42:38.569455 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:42:38 crc kubenswrapper[4835]: E0201 07:42:38.570247 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:42:48 crc kubenswrapper[4835]: I0201 07:42:48.568397 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:42:48 crc kubenswrapper[4835]: I0201 07:42:48.569085 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:42:48 crc kubenswrapper[4835]: E0201 07:42:48.569543 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:42:51 crc kubenswrapper[4835]: I0201 07:42:51.567995 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:42:51 crc kubenswrapper[4835]: I0201 07:42:51.568615 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:42:51 crc kubenswrapper[4835]: I0201 07:42:51.568881 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:42:51 crc kubenswrapper[4835]: E0201 07:42:51.569499 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator 
pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:02 crc kubenswrapper[4835]: I0201 07:43:02.567259 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:43:02 crc kubenswrapper[4835]: I0201 07:43:02.567955 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:43:02 crc kubenswrapper[4835]: E0201 07:43:02.568311 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:43:05 crc kubenswrapper[4835]: I0201 07:43:05.568498 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:05 crc kubenswrapper[4835]: I0201 07:43:05.568964 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:05 crc kubenswrapper[4835]: I0201 07:43:05.569146 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:05 crc kubenswrapper[4835]: E0201 07:43:05.569701 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:14 crc kubenswrapper[4835]: I0201 07:43:14.568024 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:43:14 crc kubenswrapper[4835]: I0201 07:43:14.568780 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:43:14 crc kubenswrapper[4835]: E0201 07:43:14.569206 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:43:19 crc kubenswrapper[4835]: I0201 07:43:19.567482 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:19 crc kubenswrapper[4835]: I0201 07:43:19.567910 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:19 crc kubenswrapper[4835]: I0201 07:43:19.568040 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:19 crc kubenswrapper[4835]: E0201 07:43:19.568370 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.354008 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c" exitCode=1 Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.354060 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c"} Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.354536 4835 scope.go:117] "RemoveContainer" containerID="0a25ac97b5294b86a329b0b8a00b6a7ec519f70771d4bc4890be6a3eaa416540" Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.355945 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.364702 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.364778 4835 scope.go:117] "RemoveContainer" containerID="fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c" Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.364928 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:24 crc kubenswrapper[4835]: E0201 07:43:24.365803 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:25 crc kubenswrapper[4835]: I0201 07:43:25.192189 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:43:25 crc kubenswrapper[4835]: I0201 07:43:25.192296 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:43:29 crc kubenswrapper[4835]: I0201 07:43:29.567107 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:43:29 crc kubenswrapper[4835]: I0201 07:43:29.569562 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:43:29 crc kubenswrapper[4835]: E0201 07:43:29.570077 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:43:39 crc kubenswrapper[4835]: I0201 07:43:39.567701 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:39 crc kubenswrapper[4835]: I0201 07:43:39.568507 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:39 crc kubenswrapper[4835]: I0201 07:43:39.568565 4835 scope.go:117] "RemoveContainer" containerID="fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c" Feb 01 07:43:39 crc kubenswrapper[4835]: I0201 07:43:39.568720 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:39 crc kubenswrapper[4835]: E0201 07:43:39.774591 4835 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:40 crc kubenswrapper[4835]: I0201 07:43:40.513147 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4"} Feb 01 07:43:40 crc kubenswrapper[4835]: I0201 07:43:40.514175 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:40 crc kubenswrapper[4835]: I0201 07:43:40.514299 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:40 crc kubenswrapper[4835]: I0201 07:43:40.514518 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:40 crc kubenswrapper[4835]: E0201 07:43:40.515207 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:44 crc kubenswrapper[4835]: I0201 07:43:44.567782 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:43:44 crc kubenswrapper[4835]: I0201 07:43:44.568685 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:43:44 crc kubenswrapper[4835]: E0201 07:43:44.569355 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:43:55 crc 
kubenswrapper[4835]: I0201 07:43:55.191939 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:43:55 crc kubenswrapper[4835]: I0201 07:43:55.192599 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:43:55 crc kubenswrapper[4835]: I0201 07:43:55.568825 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:55 crc kubenswrapper[4835]: I0201 07:43:55.568954 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:55 crc kubenswrapper[4835]: I0201 07:43:55.569133 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:55 crc kubenswrapper[4835]: E0201 07:43:55.569742 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.699388 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b" exitCode=1 Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.699466 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b"} Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.700651 4835 scope.go:117] "RemoveContainer" containerID="feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1" Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.701642 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.701761 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.701923 4835 scope.go:117] "RemoveContainer" containerID="419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b" Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.701957 4835 scope.go:117] "RemoveContainer" 
containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:58 crc kubenswrapper[4835]: E0201 07:43:58.702537 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:59 crc kubenswrapper[4835]: I0201 07:43:59.566760 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:43:59 crc kubenswrapper[4835]: I0201 07:43:59.567073 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:43:59 crc kubenswrapper[4835]: E0201 07:43:59.567302 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:10 crc kubenswrapper[4835]: I0201 07:44:10.567013 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:10 crc kubenswrapper[4835]: I0201 07:44:10.567725 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:44:10 crc kubenswrapper[4835]: E0201 07:44:10.745455 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:10 crc kubenswrapper[4835]: I0201 07:44:10.901076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"} Feb 01 07:44:10 crc kubenswrapper[4835]: I0201 07:44:10.901806 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:44:10 crc kubenswrapper[4835]: I0201 
07:44:10.902523 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:10 crc kubenswrapper[4835]: E0201 07:44:10.903154 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:11 crc kubenswrapper[4835]: I0201 07:44:11.931279 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" exitCode=1 Feb 01 07:44:11 crc kubenswrapper[4835]: I0201 07:44:11.931338 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"} Feb 01 07:44:11 crc kubenswrapper[4835]: I0201 07:44:11.931379 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:44:11 crc kubenswrapper[4835]: I0201 07:44:11.932141 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:11 crc kubenswrapper[4835]: I0201 07:44:11.932176 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:11 crc kubenswrapper[4835]: E0201 07:44:11.932668 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.535640 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.567152 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.567239 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.567394 4835 scope.go:117] "RemoveContainer" containerID="419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.567403 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:44:12 crc kubenswrapper[4835]: E0201 07:44:12.567992 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator 
pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.945580 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.945619 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:12 crc kubenswrapper[4835]: E0201 07:44:12.945979 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:13 crc kubenswrapper[4835]: I0201 07:44:13.954734 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:13 crc kubenswrapper[4835]: I0201 07:44:13.954785 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:13 crc kubenswrapper[4835]: E0201 07:44:13.955246 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:24 crc kubenswrapper[4835]: I0201 07:44:24.582185 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:24 crc kubenswrapper[4835]: I0201 07:44:24.583314 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:24 crc kubenswrapper[4835]: E0201 07:44:24.583773 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:25 crc kubenswrapper[4835]: I0201 07:44:25.192309 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:44:25 crc kubenswrapper[4835]: I0201 07:44:25.192456 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:44:25 crc kubenswrapper[4835]: I0201 07:44:25.192520 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:44:25 crc kubenswrapper[4835]: I0201 07:44:25.193342 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:44:25 crc kubenswrapper[4835]: I0201 07:44:25.193469 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced" gracePeriod=600 Feb 01 07:44:26 crc kubenswrapper[4835]: I0201 07:44:26.089024 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced" exitCode=0 Feb 01 07:44:26 crc kubenswrapper[4835]: I0201 07:44:26.089390 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced"} Feb 01 07:44:26 crc kubenswrapper[4835]: I0201 07:44:26.089935 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"} Feb 01 07:44:26 crc kubenswrapper[4835]: I0201 07:44:26.089986 4835 scope.go:117] "RemoveContainer" containerID="19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3" Feb 01 07:44:27 crc kubenswrapper[4835]: I0201 07:44:27.577306 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:44:27 crc kubenswrapper[4835]: I0201 07:44:27.577883 4835 
scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:44:27 crc kubenswrapper[4835]: I0201 07:44:27.578039 4835 scope.go:117] "RemoveContainer" containerID="419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b" Feb 01 07:44:27 crc kubenswrapper[4835]: I0201 07:44:27.578053 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:44:28 crc kubenswrapper[4835]: I0201 07:44:28.119292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"} Feb 01 07:44:28 crc kubenswrapper[4835]: I0201 07:44:28.119869 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"} Feb 01 07:44:28 crc kubenswrapper[4835]: E0201 07:44:28.929927 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1edd7394_0f8e_4271_8774_f228946e62f3.slice/crio-8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1edd7394_0f8e_4271_8774_f228946e62f3.slice/crio-conmon-8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328.scope\": RecentStats: unable to find data in memory cache]" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145365 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" exitCode=1 Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145453 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" exitCode=1 Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145469 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" exitCode=1 Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145471 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"} Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145556 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"} Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145577 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"} Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145596 4835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7"} Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145624 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.146684 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.146830 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.147071 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:44:29 crc kubenswrapper[4835]: E0201 07:44:29.147781 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.212477 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.264278 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:44:30 crc kubenswrapper[4835]: I0201 07:44:30.182940 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:44:30 crc kubenswrapper[4835]: I0201 07:44:30.183071 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:44:30 crc kubenswrapper[4835]: I0201 07:44:30.183251 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:44:30 crc kubenswrapper[4835]: E0201 07:44:30.185362 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 
01 07:44:35 crc kubenswrapper[4835]: I0201 07:44:35.567776 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:35 crc kubenswrapper[4835]: I0201 07:44:35.568609 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:35 crc kubenswrapper[4835]: E0201 07:44:35.568964 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:36 crc kubenswrapper[4835]: E0201 07:44:36.899102 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:44:37 crc kubenswrapper[4835]: I0201 07:44:37.258446 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:44:38 crc kubenswrapper[4835]: I0201 07:44:38.974982 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:44:38 crc kubenswrapper[4835]: E0201 07:44:38.975130 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:44:38 crc kubenswrapper[4835]: E0201 07:44:38.975212 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:46:40.975191247 +0000 UTC m=+1474.095627681 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:44:45 crc kubenswrapper[4835]: I0201 07:44:45.567612 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:44:45 crc kubenswrapper[4835]: I0201 07:44:45.568372 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:44:45 crc kubenswrapper[4835]: I0201 07:44:45.568563 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:44:45 crc kubenswrapper[4835]: E0201 07:44:45.568976 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:44:46 crc kubenswrapper[4835]: I0201 07:44:46.567518 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:46 crc kubenswrapper[4835]: I0201 07:44:46.567944 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:46 crc kubenswrapper[4835]: E0201 07:44:46.568482 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:57 crc kubenswrapper[4835]: I0201 07:44:57.574616 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:57 crc kubenswrapper[4835]: I0201 07:44:57.575253 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:57 crc kubenswrapper[4835]: E0201 07:44:57.575724 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
Feb 01 07:44:59 crc kubenswrapper[4835]: I0201 07:44:59.567216 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:44:59 crc kubenswrapper[4835]: I0201 07:44:59.567390 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:44:59 crc kubenswrapper[4835]: I0201 07:44:59.567670 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:44:59 crc kubenswrapper[4835]: E0201 07:44:59.568187 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.160622 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"]
Feb 01 07:45:00 crc kubenswrapper[4835]: E0201 07:45:00.161127 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="extract-utilities"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.161155 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="extract-utilities"
Feb 01 07:45:00 crc kubenswrapper[4835]: E0201 07:45:00.161180 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="extract-content"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.161189 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="extract-content"
Feb 01 07:45:00 crc kubenswrapper[4835]: E0201 07:45:00.161218 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="registry-server"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.161228 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="registry-server"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.161373 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="registry-server"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.162013 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.164712 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.164993 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.166821 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"]
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.336263 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.336371 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b92zs\" (UniqueName: \"kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.336428 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.437691 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b92zs\" (UniqueName: \"kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.437805 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.437880 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"
Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.439875 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"
\"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.454219 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.460332 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b92zs\" (UniqueName: \"kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.495276 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.809067 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"] Feb 01 07:45:01 crc kubenswrapper[4835]: I0201 07:45:01.467856 4835 generic.go:334] "Generic (PLEG): container finished" podID="84654a3b-8db7-4ec6-950c-14bec7a98590" containerID="009401fd8cd37662006bfdceb1b612e942c8e587c0addc5645fd3075fa133198" exitCode=0 Feb 01 07:45:01 crc kubenswrapper[4835]: I0201 07:45:01.467952 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" event={"ID":"84654a3b-8db7-4ec6-950c-14bec7a98590","Type":"ContainerDied","Data":"009401fd8cd37662006bfdceb1b612e942c8e587c0addc5645fd3075fa133198"} Feb 01 07:45:01 crc kubenswrapper[4835]: I0201 07:45:01.468004 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" event={"ID":"84654a3b-8db7-4ec6-950c-14bec7a98590","Type":"ContainerStarted","Data":"8c95f7a05240cb9fddeb8f2d8bd71af84e0448e408020399d48d06806e40ec67"} Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.775513 4835 util.go:48] "No ready sandbox for pod can be found. 
Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.874908 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b92zs\" (UniqueName: \"kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs\") pod \"84654a3b-8db7-4ec6-950c-14bec7a98590\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") "
Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.875134 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume\") pod \"84654a3b-8db7-4ec6-950c-14bec7a98590\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") "
Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.875181 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume\") pod \"84654a3b-8db7-4ec6-950c-14bec7a98590\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") "
Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.875897 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume" (OuterVolumeSpecName: "config-volume") pod "84654a3b-8db7-4ec6-950c-14bec7a98590" (UID: "84654a3b-8db7-4ec6-950c-14bec7a98590"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.881133 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs" (OuterVolumeSpecName: "kube-api-access-b92zs") pod "84654a3b-8db7-4ec6-950c-14bec7a98590" (UID: "84654a3b-8db7-4ec6-950c-14bec7a98590"). InnerVolumeSpecName "kube-api-access-b92zs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.883552 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "84654a3b-8db7-4ec6-950c-14bec7a98590" (UID: "84654a3b-8db7-4ec6-950c-14bec7a98590"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.977247 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.977307 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.977330 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b92zs\" (UniqueName: \"kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs\") on node \"crc\" DevicePath \"\"" Feb 01 07:45:03 crc kubenswrapper[4835]: I0201 07:45:03.483150 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" event={"ID":"84654a3b-8db7-4ec6-950c-14bec7a98590","Type":"ContainerDied","Data":"8c95f7a05240cb9fddeb8f2d8bd71af84e0448e408020399d48d06806e40ec67"} Feb 01 07:45:03 crc kubenswrapper[4835]: I0201 07:45:03.483196 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:03 crc kubenswrapper[4835]: I0201 07:45:03.483209 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c95f7a05240cb9fddeb8f2d8bd71af84e0448e408020399d48d06806e40ec67" Feb 01 07:45:12 crc kubenswrapper[4835]: I0201 07:45:12.566580 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:45:12 crc kubenswrapper[4835]: I0201 07:45:12.567655 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:12 crc kubenswrapper[4835]: E0201 07:45:12.568265 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:45:14 crc kubenswrapper[4835]: I0201 07:45:14.567985 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:45:14 crc kubenswrapper[4835]: I0201 07:45:14.568548 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:45:14 crc kubenswrapper[4835]: I0201 07:45:14.568738 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:45:14 crc kubenswrapper[4835]: E0201 07:45:14.569241 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", 
failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:45:17 crc kubenswrapper[4835]: I0201 07:45:17.086769 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gmb7x"] Feb 01 07:45:17 crc kubenswrapper[4835]: I0201 07:45:17.110455 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gmb7x"] Feb 01 07:45:17 crc kubenswrapper[4835]: I0201 07:45:17.582217 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a95fd7f-8f31-420b-a847-e13f61aa0ce9" path="/var/lib/kubelet/pods/5a95fd7f-8f31-420b-a847-e13f61aa0ce9/volumes" Feb 01 07:45:23 crc kubenswrapper[4835]: I0201 07:45:23.567545 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:45:23 crc kubenswrapper[4835]: I0201 07:45:23.568391 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:23 crc kubenswrapper[4835]: E0201 07:45:23.785903 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:45:24 crc kubenswrapper[4835]: I0201 07:45:24.675954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"} Feb 01 07:45:24 crc kubenswrapper[4835]: I0201 07:45:24.676922 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:24 crc kubenswrapper[4835]: E0201 07:45:24.677288 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:45:24 crc kubenswrapper[4835]: I0201 07:45:24.677379 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:45:25 crc kubenswrapper[4835]: I0201 07:45:25.686448 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:25 crc kubenswrapper[4835]: E0201 07:45:25.686663 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
Feb 01 07:45:27 crc kubenswrapper[4835]: I0201 07:45:27.574957 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:45:27 crc kubenswrapper[4835]: I0201 07:45:27.575504 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:45:27 crc kubenswrapper[4835]: I0201 07:45:27.575702 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:45:27 crc kubenswrapper[4835]: E0201 07:45:27.576227 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:45:30 crc kubenswrapper[4835]: I0201 07:45:30.540090 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:45:32 crc kubenswrapper[4835]: I0201 07:45:32.537748 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:45:33 crc kubenswrapper[4835]: I0201 07:45:33.538157 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.537674 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.538097 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.538990 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.539022 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.539073 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" gracePeriod=30 Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.541828 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:45:36 crc kubenswrapper[4835]: E0201 07:45:36.672273 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.788110 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" exitCode=0 Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.788167 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"} Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.788248 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.789107 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.789185 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:36 crc kubenswrapper[4835]: E0201 07:45:36.789797 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:45:41 crc kubenswrapper[4835]: I0201 07:45:41.566735 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:45:41 crc kubenswrapper[4835]: I0201 07:45:41.567929 4835 scope.go:117] "RemoveContainer" 
containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:45:41 crc kubenswrapper[4835]: I0201 07:45:41.568114 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:45:41 crc kubenswrapper[4835]: E0201 07:45:41.568455 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:45:50 crc kubenswrapper[4835]: I0201 07:45:50.567228 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:45:50 crc kubenswrapper[4835]: I0201 07:45:50.567955 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:50 crc kubenswrapper[4835]: E0201 07:45:50.568357 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:45:56 crc kubenswrapper[4835]: I0201 07:45:56.567378 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:45:56 crc kubenswrapper[4835]: I0201 07:45:56.567848 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:45:56 crc kubenswrapper[4835]: I0201 07:45:56.567993 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:45:56 crc kubenswrapper[4835]: E0201 07:45:56.568316 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" 
podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:46:00 crc kubenswrapper[4835]: E0201 07:46:00.967613 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1edd7394_0f8e_4271_8774_f228946e62f3.slice/crio-675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4.scope\": RecentStats: unable to find data in memory cache]" Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.031399 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4" exitCode=1 Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.031464 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4"} Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.031501 4835 scope.go:117] "RemoveContainer" containerID="fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c" Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.032158 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.032214 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.032236 4835 scope.go:117] "RemoveContainer" containerID="675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4" Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.032315 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:46:01 crc kubenswrapper[4835]: E0201 07:46:01.032662 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:46:05 crc kubenswrapper[4835]: I0201 07:46:05.568081 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:46:05 crc kubenswrapper[4835]: I0201 07:46:05.568620 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:46:05 crc kubenswrapper[4835]: E0201 07:46:05.568830 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:46:08 crc kubenswrapper[4835]: I0201 07:46:08.199447 4835 scope.go:117] "RemoveContainer" containerID="4150461df03e979f73af252c924d3235e5873da5e6ee9fff2b41bd3c4a7515a0" Feb 01 07:46:12 crc kubenswrapper[4835]: I0201 07:46:12.568379 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:46:12 crc kubenswrapper[4835]: I0201 07:46:12.569122 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:46:12 crc kubenswrapper[4835]: I0201 07:46:12.569169 4835 scope.go:117] "RemoveContainer" containerID="675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4" Feb 01 07:46:12 crc kubenswrapper[4835]: I0201 07:46:12.569293 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:46:12 crc kubenswrapper[4835]: E0201 07:46:12.569918 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:46:19 crc kubenswrapper[4835]: I0201 07:46:19.566746 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:46:19 crc kubenswrapper[4835]: I0201 07:46:19.567577 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:46:19 crc kubenswrapper[4835]: E0201 07:46:19.568193 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.192434 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.567518 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.567653 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.567714 4835 scope.go:117] "RemoveContainer" containerID="675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4"
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.567884 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:46:25 crc kubenswrapper[4835]: E0201 07:46:25.779793 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:46:26 crc kubenswrapper[4835]: I0201 07:46:26.329860 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6"}
Feb 01 07:46:26 crc kubenswrapper[4835]: I0201 07:46:26.331236 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:46:26 crc kubenswrapper[4835]: I0201 07:46:26.331363 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:46:26 crc kubenswrapper[4835]: I0201 07:46:26.331672 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:46:26 crc kubenswrapper[4835]: E0201 07:46:26.332243 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.049844 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-create-m9js9"] Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.060703 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-d22d-account-create-update-clkrg"] Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.071792 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-create-m9js9"] Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.082791 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-d22d-account-create-update-clkrg"] Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.581250 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766b4c0a-da92-4fe7-bf95-4a39f3fafafe" path="/var/lib/kubelet/pods/766b4c0a-da92-4fe7-bf95-4a39f3fafafe/volumes" Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.582059 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f574f591-2220-4cd1-88f7-ac79ac332aae" path="/var/lib/kubelet/pods/f574f591-2220-4cd1-88f7-ac79ac332aae/volumes" Feb 01 07:46:30 crc kubenswrapper[4835]: I0201 07:46:30.567855 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:46:30 crc kubenswrapper[4835]: I0201 07:46:30.567899 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:46:30 crc kubenswrapper[4835]: E0201 07:46:30.568300 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:46:38 crc kubenswrapper[4835]: I0201 07:46:38.567277 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:46:38 crc kubenswrapper[4835]: I0201 07:46:38.567955 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:46:38 crc kubenswrapper[4835]: I0201 07:46:38.568071 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:46:38 crc kubenswrapper[4835]: E0201 07:46:38.568386 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator 
Feb 01 07:46:40 crc kubenswrapper[4835]: E0201 07:46:40.259782 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc"
Feb 01 07:46:40 crc kubenswrapper[4835]: I0201 07:46:40.462919 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 07:46:41 crc kubenswrapper[4835]: I0201 07:46:41.055750 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 07:46:41 crc kubenswrapper[4835]: E0201 07:46:41.055917 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 07:46:41 crc kubenswrapper[4835]: E0201 07:46:41.055974 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:48:43.05595433 +0000 UTC m=+1596.176390774 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
Feb 01 07:46:41 crc kubenswrapper[4835]: I0201 07:46:41.566970 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:46:41 crc kubenswrapper[4835]: I0201 07:46:41.567021 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:46:41 crc kubenswrapper[4835]: E0201 07:46:41.567588 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:46:49 crc kubenswrapper[4835]: I0201 07:46:49.046009 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-5w5sr"]
Feb 01 07:46:49 crc kubenswrapper[4835]: I0201 07:46:49.058358 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-5w5sr"]
Feb 01 07:46:49 crc kubenswrapper[4835]: I0201 07:46:49.579198 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1d09a3-13ff-43c0-835a-de9a6f9b5103" path="/var/lib/kubelet/pods/cd1d09a3-13ff-43c0-835a-de9a6f9b5103/volumes"
Feb 01 07:46:51 crc kubenswrapper[4835]: I0201 07:46:51.567361 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:46:51 crc kubenswrapper[4835]: I0201 07:46:51.567898 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:46:51 crc kubenswrapper[4835]: I0201 07:46:51.568095 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:46:51 crc kubenswrapper[4835]: E0201 07:46:51.568623 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:46:55 crc kubenswrapper[4835]: I0201 07:46:55.038016 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"]
Feb 01 07:46:55 crc kubenswrapper[4835]: I0201 07:46:55.048222 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"]
"SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"] Feb 01 07:46:55 crc kubenswrapper[4835]: I0201 07:46:55.191940 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:46:55 crc kubenswrapper[4835]: I0201 07:46:55.192025 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:46:55 crc kubenswrapper[4835]: I0201 07:46:55.578981 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf026661-c9af-420a-8984-f7fbe212e592" path="/var/lib/kubelet/pods/bf026661-c9af-420a-8984-f7fbe212e592/volumes" Feb 01 07:46:56 crc kubenswrapper[4835]: I0201 07:46:56.566739 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:46:56 crc kubenswrapper[4835]: I0201 07:46:56.566766 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:46:56 crc kubenswrapper[4835]: E0201 07:46:56.567073 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.633361 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7" exitCode=1 Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.633426 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7"} Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.633876 4835 scope.go:117] "RemoveContainer" containerID="419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b" Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.635010 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.635163 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.635317 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7" Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.635361 4835 scope.go:117] "RemoveContainer" 
containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:46:58 crc kubenswrapper[4835]: E0201 07:46:58.635962 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:47:07 crc kubenswrapper[4835]: I0201 07:47:07.574549 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:47:07 crc kubenswrapper[4835]: I0201 07:47:07.575160 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:47:07 crc kubenswrapper[4835]: E0201 07:47:07.575642 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:47:08 crc kubenswrapper[4835]: I0201 07:47:08.266257 4835 scope.go:117] "RemoveContainer" containerID="eabeabeae4f73ee57a400f521880f710c03aa93decaac629af5189bf021874a3" Feb 01 07:47:08 crc kubenswrapper[4835]: I0201 07:47:08.328837 4835 scope.go:117] "RemoveContainer" containerID="a06f9b42349fa2ea28d87918e953134cff78d85714b4da730fc4895d65231d70" Feb 01 07:47:08 crc kubenswrapper[4835]: I0201 07:47:08.376951 4835 scope.go:117] "RemoveContainer" containerID="fe725302a8ffa5be3e180ac6b253d15da455fbca578acdea4628b374a3cde003" Feb 01 07:47:08 crc kubenswrapper[4835]: I0201 07:47:08.402075 4835 scope.go:117] "RemoveContainer" containerID="215269eb271992c8cbc8e79c691e2434a7dce5223c9258cc1ad2fca20f897f92" Feb 01 07:47:12 crc kubenswrapper[4835]: I0201 07:47:12.567847 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:47:12 crc kubenswrapper[4835]: I0201 07:47:12.568318 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:47:12 crc kubenswrapper[4835]: I0201 07:47:12.568502 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7" Feb 01 07:47:12 crc kubenswrapper[4835]: I0201 07:47:12.568518 4835 scope.go:117] "RemoveContainer" 
containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:47:12 crc kubenswrapper[4835]: E0201 07:47:12.569141 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:47:22 crc kubenswrapper[4835]: I0201 07:47:22.567003 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:47:22 crc kubenswrapper[4835]: I0201 07:47:22.567366 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:47:22 crc kubenswrapper[4835]: E0201 07:47:22.567776 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:47:24 crc kubenswrapper[4835]: I0201 07:47:24.568080 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:47:24 crc kubenswrapper[4835]: I0201 07:47:24.568660 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:47:24 crc kubenswrapper[4835]: I0201 07:47:24.568902 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7" Feb 01 07:47:24 crc kubenswrapper[4835]: I0201 07:47:24.568927 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:47:24 crc kubenswrapper[4835]: E0201 07:47:24.569651 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.191611 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.191733 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.191813 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.192864 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.192978 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" gracePeriod=600 Feb 01 07:47:25 crc kubenswrapper[4835]: E0201 07:47:25.332545 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.900539 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" exitCode=0 Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.900609 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"} Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.900682 4835 scope.go:117] "RemoveContainer" containerID="a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced" Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.901459 4835 
scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:47:25 crc kubenswrapper[4835]: E0201 07:47:25.901876 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.059550 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-db-create-ddqhc"] Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.067582 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv"] Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.077071 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv"] Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.083500 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-db-create-ddqhc"] Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.584001 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26692abf-b5f8-4461-992d-508cb9b73bb2" path="/var/lib/kubelet/pods/26692abf-b5f8-4461-992d-508cb9b73bb2/volumes" Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.585384 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" path="/var/lib/kubelet/pods/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0/volumes" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.082986 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"] Feb 01 07:47:30 crc kubenswrapper[4835]: E0201 07:47:30.083472 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84654a3b-8db7-4ec6-950c-14bec7a98590" containerName="collect-profiles" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.083494 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="84654a3b-8db7-4ec6-950c-14bec7a98590" containerName="collect-profiles" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.083777 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="84654a3b-8db7-4ec6-950c-14bec7a98590" containerName="collect-profiles" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.085791 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.098649 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"] Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.255314 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk8vs\" (UniqueName: \"kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.255556 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.255945 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.357327 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.357440 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.357484 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk8vs\" (UniqueName: \"kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.357944 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.358185 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.385947 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xk8vs\" (UniqueName: \"kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.427306 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.881983 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"] Feb 01 07:47:30 crc kubenswrapper[4835]: W0201 07:47:30.886124 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21d464c1_793a_4b74_af45_55a092004f64.slice/crio-41ddcd108b26975c17744312e6a174235fa49a681da173a14f0314dcfb971b52 WatchSource:0}: Error finding container 41ddcd108b26975c17744312e6a174235fa49a681da173a14f0314dcfb971b52: Status 404 returned error can't find the container with id 41ddcd108b26975c17744312e6a174235fa49a681da173a14f0314dcfb971b52 Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.952294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerStarted","Data":"41ddcd108b26975c17744312e6a174235fa49a681da173a14f0314dcfb971b52"} Feb 01 07:47:31 crc kubenswrapper[4835]: I0201 07:47:31.963523 4835 generic.go:334] "Generic (PLEG): container finished" podID="21d464c1-793a-4b74-af45-55a092004f64" containerID="d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6" exitCode=0 Feb 01 07:47:31 crc kubenswrapper[4835]: I0201 07:47:31.963585 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerDied","Data":"d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6"} Feb 01 07:47:31 crc kubenswrapper[4835]: I0201 07:47:31.966868 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 07:47:32 crc kubenswrapper[4835]: I0201 07:47:32.975666 4835 generic.go:334] "Generic (PLEG): container finished" podID="21d464c1-793a-4b74-af45-55a092004f64" containerID="274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de" exitCode=0 Feb 01 07:47:32 crc kubenswrapper[4835]: I0201 07:47:32.975813 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerDied","Data":"274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de"} Feb 01 07:47:33 crc kubenswrapper[4835]: I0201 07:47:33.989722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerStarted","Data":"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a"} Feb 01 07:47:34 crc kubenswrapper[4835]: I0201 07:47:34.566650 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:47:34 crc kubenswrapper[4835]: I0201 07:47:34.566698 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:47:34 crc kubenswrapper[4835]: E0201 
07:47:34.567086 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:47:36 crc kubenswrapper[4835]: I0201 07:47:36.567853 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:47:36 crc kubenswrapper[4835]: I0201 07:47:36.568270 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:47:36 crc kubenswrapper[4835]: I0201 07:47:36.568386 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7" Feb 01 07:47:36 crc kubenswrapper[4835]: I0201 07:47:36.568398 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:47:36 crc kubenswrapper[4835]: E0201 07:47:36.568772 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:47:40 crc kubenswrapper[4835]: I0201 07:47:40.428552 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:40 crc kubenswrapper[4835]: I0201 07:47:40.428978 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:40 crc kubenswrapper[4835]: I0201 07:47:40.539777 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:40 crc kubenswrapper[4835]: I0201 07:47:40.565610 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vxqbf" podStartSLOduration=9.142474894 podStartE2EDuration="10.565592744s" podCreationTimestamp="2026-02-01 07:47:30 +0000 UTC" firstStartedPulling="2026-02-01 07:47:31.966456788 +0000 UTC m=+1525.086893262" lastFinishedPulling="2026-02-01 07:47:33.389574648 +0000 UTC m=+1526.510011112" observedRunningTime="2026-02-01 07:47:34.018608569 +0000 UTC m=+1527.139045033" 
watchObservedRunningTime="2026-02-01 07:47:40.565592744 +0000 UTC m=+1533.686029188" Feb 01 07:47:40 crc kubenswrapper[4835]: I0201 07:47:40.567655 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:47:40 crc kubenswrapper[4835]: E0201 07:47:40.568080 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:47:41 crc kubenswrapper[4835]: I0201 07:47:41.114910 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:41 crc kubenswrapper[4835]: I0201 07:47:41.183382 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"] Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.079631 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vxqbf" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="registry-server" containerID="cri-o://0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a" gracePeriod=2 Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.533386 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.696452 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities\") pod \"21d464c1-793a-4b74-af45-55a092004f64\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.696572 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk8vs\" (UniqueName: \"kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs\") pod \"21d464c1-793a-4b74-af45-55a092004f64\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.696681 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content\") pod \"21d464c1-793a-4b74-af45-55a092004f64\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.697758 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities" (OuterVolumeSpecName: "utilities") pod "21d464c1-793a-4b74-af45-55a092004f64" (UID: "21d464c1-793a-4b74-af45-55a092004f64"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.699090 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.703567 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs" (OuterVolumeSpecName: "kube-api-access-xk8vs") pod "21d464c1-793a-4b74-af45-55a092004f64" (UID: "21d464c1-793a-4b74-af45-55a092004f64"). InnerVolumeSpecName "kube-api-access-xk8vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.724596 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21d464c1-793a-4b74-af45-55a092004f64" (UID: "21d464c1-793a-4b74-af45-55a092004f64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.801081 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk8vs\" (UniqueName: \"kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs\") on node \"crc\" DevicePath \"\"" Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.801117 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.097397 4835 generic.go:334] "Generic (PLEG): container finished" podID="21d464c1-793a-4b74-af45-55a092004f64" containerID="0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a" exitCode=0 Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.097507 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerDied","Data":"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a"} Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.097529 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxqbf" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.097573 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerDied","Data":"41ddcd108b26975c17744312e6a174235fa49a681da173a14f0314dcfb971b52"} Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.097632 4835 scope.go:117] "RemoveContainer" containerID="0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.144308 4835 scope.go:117] "RemoveContainer" containerID="274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.161534 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"] Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.176532 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"] Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.183763 4835 scope.go:117] "RemoveContainer" containerID="d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.246125 4835 scope.go:117] "RemoveContainer" containerID="0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a" Feb 01 07:47:44 crc kubenswrapper[4835]: E0201 07:47:44.246759 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a\": container with ID starting with 0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a not found: ID does not exist" containerID="0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.246872 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a"} err="failed to get container status \"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a\": rpc error: code = NotFound desc = could not find container \"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a\": container with ID starting with 0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a not found: ID does not exist" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.246926 4835 scope.go:117] "RemoveContainer" containerID="274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de" Feb 01 07:47:44 crc kubenswrapper[4835]: E0201 07:47:44.247265 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de\": container with ID starting with 274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de not found: ID does not exist" containerID="274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.247324 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de"} err="failed to get container status \"274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de\": rpc error: code = NotFound desc = could not find 
container \"274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de\": container with ID starting with 274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de not found: ID does not exist" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.247357 4835 scope.go:117] "RemoveContainer" containerID="d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6" Feb 01 07:47:44 crc kubenswrapper[4835]: E0201 07:47:44.247683 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6\": container with ID starting with d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6 not found: ID does not exist" containerID="d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.247728 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6"} err="failed to get container status \"d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6\": rpc error: code = NotFound desc = could not find container \"d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6\": container with ID starting with d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6 not found: ID does not exist" Feb 01 07:47:45 crc kubenswrapper[4835]: I0201 07:47:45.600188 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d464c1-793a-4b74-af45-55a092004f64" path="/var/lib/kubelet/pods/21d464c1-793a-4b74-af45-55a092004f64/volumes" Feb 01 07:47:47 crc kubenswrapper[4835]: I0201 07:47:47.575184 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:47:47 crc kubenswrapper[4835]: I0201 07:47:47.575675 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:47:47 crc kubenswrapper[4835]: E0201 07:47:47.576064 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:47:48 crc kubenswrapper[4835]: I0201 07:47:48.567633 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:47:48 crc kubenswrapper[4835]: I0201 07:47:48.567778 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:47:48 crc kubenswrapper[4835]: I0201 07:47:48.568142 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7" Feb 01 07:47:48 crc kubenswrapper[4835]: I0201 07:47:48.568181 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:47:48 crc kubenswrapper[4835]: E0201 07:47:48.809082 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:47:49 crc kubenswrapper[4835]: I0201 07:47:49.157307 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea"} Feb 01 07:47:49 crc kubenswrapper[4835]: I0201 07:47:49.158387 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:47:49 crc kubenswrapper[4835]: I0201 07:47:49.158592 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:47:49 crc kubenswrapper[4835]: I0201 07:47:49.158826 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:47:49 crc kubenswrapper[4835]: E0201 07:47:49.159629 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.508900 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:47:50 crc kubenswrapper[4835]: E0201 07:47:50.509362 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="extract-utilities" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.509380 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="extract-utilities" Feb 01 07:47:50 crc kubenswrapper[4835]: E0201 07:47:50.509457 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="extract-content" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.509469 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="extract-content" Feb 01 07:47:50 crc kubenswrapper[4835]: E0201 07:47:50.509484 4835 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="registry-server" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.509492 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="registry-server" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.509690 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="registry-server" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.512757 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.521857 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.626791 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.626883 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgd9b\" (UniqueName: \"kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.627012 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.728285 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.728402 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.728444 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgd9b\" (UniqueName: \"kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.728908 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities\") pod 
\"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.729150 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.760446 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgd9b\" (UniqueName: \"kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.882134 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:51 crc kubenswrapper[4835]: W0201 07:47:51.336602 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb01afbf3_db38_46a9_a5f4_bb290653ec52.slice/crio-c23bed94c7caa98a2b02390bf50f9b29f4693ad660c026f33aa2ba62a7995cb8 WatchSource:0}: Error finding container c23bed94c7caa98a2b02390bf50f9b29f4693ad660c026f33aa2ba62a7995cb8: Status 404 returned error can't find the container with id c23bed94c7caa98a2b02390bf50f9b29f4693ad660c026f33aa2ba62a7995cb8 Feb 01 07:47:51 crc kubenswrapper[4835]: I0201 07:47:51.337450 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:47:52 crc kubenswrapper[4835]: I0201 07:47:52.192275 4835 generic.go:334] "Generic (PLEG): container finished" podID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerID="33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f" exitCode=0 Feb 01 07:47:52 crc kubenswrapper[4835]: I0201 07:47:52.192362 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerDied","Data":"33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f"} Feb 01 07:47:52 crc kubenswrapper[4835]: I0201 07:47:52.192610 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerStarted","Data":"c23bed94c7caa98a2b02390bf50f9b29f4693ad660c026f33aa2ba62a7995cb8"} Feb 01 07:47:53 crc kubenswrapper[4835]: I0201 07:47:53.203882 4835 generic.go:334] "Generic (PLEG): container finished" podID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerID="0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4" exitCode=0 Feb 01 07:47:53 crc kubenswrapper[4835]: I0201 07:47:53.204126 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerDied","Data":"0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4"} Feb 01 07:47:54 crc kubenswrapper[4835]: I0201 07:47:54.214533 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" 
event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerStarted","Data":"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30"} Feb 01 07:47:54 crc kubenswrapper[4835]: I0201 07:47:54.237498 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-94kkf" podStartSLOduration=2.822507185 podStartE2EDuration="4.237471021s" podCreationTimestamp="2026-02-01 07:47:50 +0000 UTC" firstStartedPulling="2026-02-01 07:47:52.194861772 +0000 UTC m=+1545.315298236" lastFinishedPulling="2026-02-01 07:47:53.609825598 +0000 UTC m=+1546.730262072" observedRunningTime="2026-02-01 07:47:54.236172697 +0000 UTC m=+1547.356609151" watchObservedRunningTime="2026-02-01 07:47:54.237471021 +0000 UTC m=+1547.357907485" Feb 01 07:47:54 crc kubenswrapper[4835]: I0201 07:47:54.566868 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:47:54 crc kubenswrapper[4835]: E0201 07:47:54.567340 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:47:58 crc kubenswrapper[4835]: I0201 07:47:58.567060 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:47:58 crc kubenswrapper[4835]: I0201 07:47:58.567492 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:47:58 crc kubenswrapper[4835]: E0201 07:47:58.568005 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:48:00 crc kubenswrapper[4835]: I0201 07:48:00.882772 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:00 crc kubenswrapper[4835]: I0201 07:48:00.884253 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:00 crc kubenswrapper[4835]: I0201 07:48:00.963437 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:01 crc kubenswrapper[4835]: I0201 07:48:01.352590 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:01 crc kubenswrapper[4835]: I0201 07:48:01.424804 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.296445 4835 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/community-operators-94kkf" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="registry-server" containerID="cri-o://94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30" gracePeriod=2 Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.566938 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.567313 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.567462 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:48:03 crc kubenswrapper[4835]: E0201 07:48:03.567875 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.771603 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.842103 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities\") pod \"b01afbf3-db38-46a9-a5f4-bb290653ec52\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.842214 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content\") pod \"b01afbf3-db38-46a9-a5f4-bb290653ec52\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.842465 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgd9b\" (UniqueName: \"kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b\") pod \"b01afbf3-db38-46a9-a5f4-bb290653ec52\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.844145 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities" (OuterVolumeSpecName: "utilities") pod "b01afbf3-db38-46a9-a5f4-bb290653ec52" (UID: "b01afbf3-db38-46a9-a5f4-bb290653ec52"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.853962 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b" (OuterVolumeSpecName: "kube-api-access-hgd9b") pod "b01afbf3-db38-46a9-a5f4-bb290653ec52" (UID: "b01afbf3-db38-46a9-a5f4-bb290653ec52"). InnerVolumeSpecName "kube-api-access-hgd9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.944548 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgd9b\" (UniqueName: \"kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.944588 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.180962 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b01afbf3-db38-46a9-a5f4-bb290653ec52" (UID: "b01afbf3-db38-46a9-a5f4-bb290653ec52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.248988 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.312806 4835 generic.go:334] "Generic (PLEG): container finished" podID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerID="94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30" exitCode=0 Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.312877 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerDied","Data":"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30"} Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.312931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerDied","Data":"c23bed94c7caa98a2b02390bf50f9b29f4693ad660c026f33aa2ba62a7995cb8"} Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.312964 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.312982 4835 scope.go:117] "RemoveContainer" containerID="94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.352376 4835 scope.go:117] "RemoveContainer" containerID="0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.386277 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.393888 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.408167 4835 scope.go:117] "RemoveContainer" containerID="33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.430669 4835 scope.go:117] "RemoveContainer" containerID="94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30" Feb 01 07:48:04 crc kubenswrapper[4835]: E0201 07:48:04.432285 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30\": container with ID starting with 94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30 not found: ID does not exist" containerID="94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.432373 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30"} err="failed to get container status \"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30\": rpc error: code = NotFound desc = could not find container \"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30\": container with ID starting with 94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30 not found: ID does not exist" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.432449 4835 scope.go:117] "RemoveContainer" containerID="0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4" Feb 01 07:48:04 crc kubenswrapper[4835]: E0201 07:48:04.433027 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4\": container with ID starting with 0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4 not found: ID does not exist" containerID="0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.433058 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4"} err="failed to get container status \"0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4\": rpc error: code = NotFound desc = could not find container \"0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4\": container with ID starting with 0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4 not found: ID does not exist" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.433078 4835 scope.go:117] "RemoveContainer" 
containerID="33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f" Feb 01 07:48:04 crc kubenswrapper[4835]: E0201 07:48:04.433646 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f\": container with ID starting with 33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f not found: ID does not exist" containerID="33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.433670 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f"} err="failed to get container status \"33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f\": rpc error: code = NotFound desc = could not find container \"33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f\": container with ID starting with 33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f not found: ID does not exist" Feb 01 07:48:05 crc kubenswrapper[4835]: I0201 07:48:05.567951 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:48:05 crc kubenswrapper[4835]: E0201 07:48:05.568370 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:48:05 crc kubenswrapper[4835]: I0201 07:48:05.583841 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" path="/var/lib/kubelet/pods/b01afbf3-db38-46a9-a5f4-bb290653ec52/volumes" Feb 01 07:48:08 crc kubenswrapper[4835]: I0201 07:48:08.511802 4835 scope.go:117] "RemoveContainer" containerID="65cf85b1dd72d5635988e485f041129154e6406263a9f9918622bbd9bb651c81" Feb 01 07:48:08 crc kubenswrapper[4835]: I0201 07:48:08.536122 4835 scope.go:117] "RemoveContainer" containerID="212958e93fcbd8f3fdf3afad7d233490e91ef9f2cf2380e3ac58f8cc1722a0b6" Feb 01 07:48:13 crc kubenswrapper[4835]: I0201 07:48:13.567319 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:48:13 crc kubenswrapper[4835]: I0201 07:48:13.567361 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:48:13 crc kubenswrapper[4835]: E0201 07:48:13.567697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:48:14 crc kubenswrapper[4835]: I0201 07:48:14.568130 4835 scope.go:117] "RemoveContainer" 
containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:48:14 crc kubenswrapper[4835]: I0201 07:48:14.568263 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:48:14 crc kubenswrapper[4835]: I0201 07:48:14.568470 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:48:14 crc kubenswrapper[4835]: E0201 07:48:14.568941 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:48:20 crc kubenswrapper[4835]: I0201 07:48:20.566487 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:48:20 crc kubenswrapper[4835]: E0201 07:48:20.567513 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:48:25 crc kubenswrapper[4835]: I0201 07:48:25.567722 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:48:25 crc kubenswrapper[4835]: I0201 07:48:25.568303 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:48:25 crc kubenswrapper[4835]: I0201 07:48:25.568446 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:48:25 crc kubenswrapper[4835]: E0201 07:48:25.568878 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:48:27 crc kubenswrapper[4835]: I0201 07:48:27.576956 4835 scope.go:117] "RemoveContainer" 
containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:48:27 crc kubenswrapper[4835]: I0201 07:48:27.577351 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:48:27 crc kubenswrapper[4835]: E0201 07:48:27.577779 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.259877 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260234 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-server" containerID="cri-o://abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260367 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-replicator" containerID="cri-o://57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260333 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-auditor" containerID="cri-o://115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260401 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" containerID="cri-o://24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260452 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-reaper" containerID="cri-o://c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260389 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="rsync" containerID="cri-o://1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260494 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-server" 
containerID="cri-o://e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260505 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-auditor" containerID="cri-o://c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260444 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-server" containerID="cri-o://eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260538 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-auditor" containerID="cri-o://3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260342 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="swift-recon-cron" containerID="cri-o://c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260593 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" containerID="cri-o://2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546023 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546376 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546386 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546393 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546183 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546518 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6"} Feb 01 07:48:29 
crc kubenswrapper[4835]: I0201 07:48:29.546399 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546563 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546590 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546589 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546617 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546618 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546631 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546644 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546669 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546693 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546597 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7" Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.607341 4835 scope.go:117] "RemoveContainer" containerID="675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4" Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566644 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876" exitCode=0 Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566706 4835 generic.go:334] 
"Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4" exitCode=0 Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566726 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541" exitCode=0 Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566724 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876"} Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566787 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4"} Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541"} Feb 01 07:48:35 crc kubenswrapper[4835]: I0201 07:48:35.567768 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:48:35 crc kubenswrapper[4835]: E0201 07:48:35.569002 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:48:42 crc kubenswrapper[4835]: I0201 07:48:42.566806 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:48:42 crc kubenswrapper[4835]: I0201 07:48:42.567293 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:48:42 crc kubenswrapper[4835]: E0201 07:48:42.567565 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:48:43 crc kubenswrapper[4835]: I0201 07:48:43.062434 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:48:43 crc kubenswrapper[4835]: E0201 07:48:43.062876 4835 configmap.go:193] Couldn't get configMap 
swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:48:43 crc kubenswrapper[4835]: E0201 07:48:43.062933 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:50:45.062912622 +0000 UTC m=+1718.183349056 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:48:43 crc kubenswrapper[4835]: E0201 07:48:43.464534 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:48:43 crc kubenswrapper[4835]: I0201 07:48:43.697634 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:48:46 crc kubenswrapper[4835]: I0201 07:48:46.567349 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:48:46 crc kubenswrapper[4835]: E0201 07:48:46.568267 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:48:53 crc kubenswrapper[4835]: I0201 07:48:53.566802 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:48:53 crc kubenswrapper[4835]: I0201 07:48:53.567303 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:48:53 crc kubenswrapper[4835]: E0201 07:48:53.567548 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.630527 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"] Feb 01 07:48:54 crc kubenswrapper[4835]: E0201 07:48:54.630844 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="extract-utilities" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.630859 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="extract-utilities" 
Feb 01 07:48:54 crc kubenswrapper[4835]: E0201 07:48:54.630888 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="registry-server" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.630898 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="registry-server" Feb 01 07:48:54 crc kubenswrapper[4835]: E0201 07:48:54.630926 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="extract-content" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.630935 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="extract-content" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.631115 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="registry-server" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.632043 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.653152 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"] Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.729190 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-run-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.729246 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-log-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.729280 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0449d2d9-ddcc-4eaa-84b1-9095448105f5-config-data\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.729410 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntmxx\" (UniqueName: \"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-kube-api-access-ntmxx\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.729572 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-etc-swift\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.830713 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-ntmxx\" (UniqueName: \"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-kube-api-access-ntmxx\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.830820 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-etc-swift\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.830876 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-run-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.830903 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-log-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.830922 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0449d2d9-ddcc-4eaa-84b1-9095448105f5-config-data\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.831369 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-log-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.831594 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-run-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.837960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0449d2d9-ddcc-4eaa-84b1-9095448105f5-config-data\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.846716 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-etc-swift\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.864553 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntmxx\" (UniqueName: 
\"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-kube-api-access-ntmxx\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.018613 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.330150 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"] Feb 01 07:48:55 crc kubenswrapper[4835]: W0201 07:48:55.346570 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0449d2d9_ddcc_4eaa_84b1_9095448105f5.slice/crio-149a7db6d3e3e367fac7d873cfee0becf2c8c9c52da7d468a7fef2e1cec7a233 WatchSource:0}: Error finding container 149a7db6d3e3e367fac7d873cfee0becf2c8c9c52da7d468a7fef2e1cec7a233: Status 404 returned error can't find the container with id 149a7db6d3e3e367fac7d873cfee0becf2c8c9c52da7d468a7fef2e1cec7a233 Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.801658 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"1838599b5d9bc0829100b3d6f15b7c7c33c2ec97bcdc55704c4ebbde697b911e"} Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.801757 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc"} Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.801778 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"149a7db6d3e3e367fac7d873cfee0becf2c8c9c52da7d468a7fef2e1cec7a233"} Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.801838 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.829944 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podStartSLOduration=1.829913235 podStartE2EDuration="1.829913235s" podCreationTimestamp="2026-02-01 07:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:48:55.828232151 +0000 UTC m=+1608.948668605" watchObservedRunningTime="2026-02-01 07:48:55.829913235 +0000 UTC m=+1608.950349669" Feb 01 07:48:56 crc kubenswrapper[4835]: I0201 07:48:56.817659 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"1838599b5d9bc0829100b3d6f15b7c7c33c2ec97bcdc55704c4ebbde697b911e"} Feb 01 07:48:56 crc kubenswrapper[4835]: I0201 07:48:56.818067 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:56 crc kubenswrapper[4835]: I0201 07:48:56.818301 4835 scope.go:117] "RemoveContainer" containerID="1838599b5d9bc0829100b3d6f15b7c7c33c2ec97bcdc55704c4ebbde697b911e" Feb 
01 07:48:56 crc kubenswrapper[4835]: I0201 07:48:56.817478 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="1838599b5d9bc0829100b3d6f15b7c7c33c2ec97bcdc55704c4ebbde697b911e" exitCode=1 Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.839113 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888"} Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.839826 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.849210 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.850615 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.856174 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.896514 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.896573 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds6g4\" (UniqueName: \"kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.896620 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.997663 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds6g4\" (UniqueName: \"kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.998035 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.998311 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.998582 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.999379 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.040890 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds6g4\" (UniqueName: \"kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.172678 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.658526 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.850489 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" exitCode=1 Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.850559 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888"} Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.850590 4835 scope.go:117] "RemoveContainer" containerID="1838599b5d9bc0829100b3d6f15b7c7c33c2ec97bcdc55704c4ebbde697b911e" Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.851502 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:48:58 crc kubenswrapper[4835]: E0201 07:48:58.851994 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.854355 4835 generic.go:334] "Generic (PLEG): container finished" podID="952a92f0-8bd4-4aa9-b437-af019f748380" containerID="811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee" exitCode=0 Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.854416 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" 
event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerDied","Data":"811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee"} Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.854491 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerStarted","Data":"531b752c0353bd0cf7d0d623b4ef2f05ab183ae8b42ef50855bcea2f7ac14cc4"} Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.567211 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:48:59 crc kubenswrapper[4835]: E0201 07:48:59.567980 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.722446 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.838845 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1edd7394-0f8e-4271-8774-f228946e62f3\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.839030 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"1edd7394-0f8e-4271-8774-f228946e62f3\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.839065 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock\") pod \"1edd7394-0f8e-4271-8774-f228946e62f3\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.839098 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt6t9\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9\") pod \"1edd7394-0f8e-4271-8774-f228946e62f3\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.839126 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache\") pod \"1edd7394-0f8e-4271-8774-f228946e62f3\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.839827 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock" (OuterVolumeSpecName: "lock") pod "1edd7394-0f8e-4271-8774-f228946e62f3" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.840036 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache" (OuterVolumeSpecName: "cache") pod "1edd7394-0f8e-4271-8774-f228946e62f3" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.843537 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "swift") pod "1edd7394-0f8e-4271-8774-f228946e62f3" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.843863 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9" (OuterVolumeSpecName: "kube-api-access-wt6t9") pod "1edd7394-0f8e-4271-8774-f228946e62f3" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3"). InnerVolumeSpecName "kube-api-access-wt6t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.843920 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1edd7394-0f8e-4271-8774-f228946e62f3" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.870152 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b" exitCode=137 Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.870285 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.870308 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b"} Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.874851 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"965930581ebfe6a06bce16c42d9dbc0702e4b9210c5c9c9057f64d28fcd26803"} Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.874877 4835 scope.go:117] "RemoveContainer" containerID="24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.878880 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerStarted","Data":"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997"} Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.890145 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:48:59 crc kubenswrapper[4835]: E0201 07:48:59.890617 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.895108 4835 scope.go:117] "RemoveContainer" containerID="2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.941132 4835 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.941181 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt6t9\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.941199 4835 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.941244 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.941331 4835 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.942551 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.959132 4835 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.963551 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.964861 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.968841 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.986706 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013086 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013390 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013413 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013441 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013450 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013466 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013474 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013485 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013494 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013504 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013512 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013526 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="swift-recon-cron" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013534 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="swift-recon-cron" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013547 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" 
containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013557 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013569 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013579 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013588 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013597 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013608 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013616 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013629 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013637 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013646 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013654 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-server" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013666 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013674 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013686 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013694 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013707 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013715 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013727 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013735 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013747 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013757 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013767 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013775 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013788 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-reaper" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013797 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-reaper" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013807 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013814 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013828 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013836 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013847 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013855 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013865 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013873 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013883 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013892 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013908 4835 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013917 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013928 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013936 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013950 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013958 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-server" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013974 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013999 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-server" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.014009 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014017 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.014027 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014035 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.014045 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="rsync" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014053 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="rsync" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.014067 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014075 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014223 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014250 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014260 4835 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014270 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014282 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014293 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014304 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014314 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014328 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014340 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="swift-recon-cron" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014349 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014359 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014369 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014379 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014390 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014401 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014418 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014951 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014965 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014978 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-reaper" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014992 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015006 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015040 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015052 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015060 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015070 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015080 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015092 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015102 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015115 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015124 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015132 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015147 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015155 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="rsync" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015326 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015337 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015348 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 
07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015356 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015372 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015380 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015394 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015402 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015414 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015443 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015460 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015468 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015483 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015490 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015828 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015849 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015859 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015879 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015889 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.016037 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.016046 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.016217 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.021510 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.024062 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.025612 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.029007 4835 scope.go:117] "RemoveContainer" containerID="c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.032790 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.045080 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.064638 4835 scope.go:117] "RemoveContainer" containerID="1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.091480 4835 scope.go:117] "RemoveContainer" containerID="115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.110466 4835 scope.go:117] "RemoveContainer" containerID="57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.126199 4835 scope.go:117] "RemoveContainer" containerID="e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.143168 4835 scope.go:117] "RemoveContainer" containerID="3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.146488 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt7d8\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-kube-api-access-tt7d8\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.146534 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-etc-swift\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.146560 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-cache\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " 
pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.146596 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-lock\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.146713 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.161927 4835 scope.go:117] "RemoveContainer" containerID="eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.194408 4835 scope.go:117] "RemoveContainer" containerID="c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.215043 4835 scope.go:117] "RemoveContainer" containerID="c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.232612 4835 scope.go:117] "RemoveContainer" containerID="abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248304 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248528 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") device mount path \"/mnt/openstack/pv10\"" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248548 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt7d8\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-kube-api-access-tt7d8\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248591 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-etc-swift\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248625 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-cache\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-lock\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.249734 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-lock\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.249757 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-cache\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.251459 4835 scope.go:117] "RemoveContainer" containerID="24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.253418 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-etc-swift\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.256870 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea\": container with ID starting with 24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea not found: ID does not exist" containerID="24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.256933 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea"} err="failed to get container status \"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea\": rpc error: code = NotFound desc = could not find container \"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea\": container with ID starting with 24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.256970 4835 scope.go:117] "RemoveContainer" containerID="2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.257329 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6\": container with ID starting with 2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6 not found: ID does not exist" containerID="2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.257372 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6"} err="failed to get container status \"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6\": rpc error: code = NotFound desc = could not find container 
\"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6\": container with ID starting with 2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.257400 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.257699 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328\": container with ID starting with 8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328 not found: ID does not exist" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.257726 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"} err="failed to get container status \"8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328\": rpc error: code = NotFound desc = could not find container \"8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328\": container with ID starting with 8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.257744 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.257978 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2\": container with ID starting with 8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2 not found: ID does not exist" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258007 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"} err="failed to get container status \"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2\": rpc error: code = NotFound desc = could not find container \"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2\": container with ID starting with 8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258025 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.258256 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d\": container with ID starting with 258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d not found: ID does not exist" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258302 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"} 
err="failed to get container status \"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d\": rpc error: code = NotFound desc = could not find container \"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d\": container with ID starting with 258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258324 4835 scope.go:117] "RemoveContainer" containerID="c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.258570 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b\": container with ID starting with c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b not found: ID does not exist" containerID="c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258594 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b"} err="failed to get container status \"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b\": rpc error: code = NotFound desc = could not find container \"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b\": container with ID starting with c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258611 4835 scope.go:117] "RemoveContainer" containerID="1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.258810 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876\": container with ID starting with 1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876 not found: ID does not exist" containerID="1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258836 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876"} err="failed to get container status \"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876\": rpc error: code = NotFound desc = could not find container \"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876\": container with ID starting with 1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258853 4835 scope.go:117] "RemoveContainer" containerID="115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.259047 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14\": container with ID starting with 115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14 not found: ID does not exist" containerID="115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259072 4835 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14"} err="failed to get container status \"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14\": rpc error: code = NotFound desc = could not find container \"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14\": container with ID starting with 115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259089 4835 scope.go:117] "RemoveContainer" containerID="57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.259286 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c\": container with ID starting with 57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c not found: ID does not exist" containerID="57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259313 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c"} err="failed to get container status \"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c\": rpc error: code = NotFound desc = could not find container \"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c\": container with ID starting with 57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259329 4835 scope.go:117] "RemoveContainer" containerID="e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.259592 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4\": container with ID starting with e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4 not found: ID does not exist" containerID="e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259641 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4"} err="failed to get container status \"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4\": rpc error: code = NotFound desc = could not find container \"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4\": container with ID starting with e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259709 4835 scope.go:117] "RemoveContainer" containerID="3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.259975 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10\": container with ID starting with 3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10 not found: ID does 
not exist" containerID="3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260003 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10"} err="failed to get container status \"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10\": rpc error: code = NotFound desc = could not find container \"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10\": container with ID starting with 3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260020 4835 scope.go:117] "RemoveContainer" containerID="eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.260258 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a\": container with ID starting with eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a not found: ID does not exist" containerID="eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260285 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a"} err="failed to get container status \"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a\": rpc error: code = NotFound desc = could not find container \"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a\": container with ID starting with eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260306 4835 scope.go:117] "RemoveContainer" containerID="c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.260531 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a\": container with ID starting with c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a not found: ID does not exist" containerID="c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260556 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a"} err="failed to get container status \"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a\": rpc error: code = NotFound desc = could not find container \"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a\": container with ID starting with c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260574 4835 scope.go:117] "RemoveContainer" containerID="c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.260782 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407\": container with ID starting with c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407 not found: ID does not exist" containerID="c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260808 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407"} err="failed to get container status \"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407\": rpc error: code = NotFound desc = could not find container \"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407\": container with ID starting with c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260826 4835 scope.go:117] "RemoveContainer" containerID="abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.261016 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541\": container with ID starting with abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541 not found: ID does not exist" containerID="abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.261040 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541"} err="failed to get container status \"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541\": rpc error: code = NotFound desc = could not find container \"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541\": container with ID starting with abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.269350 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt7d8\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-kube-api-access-tt7d8\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.269946 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.366400 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.652904 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:49:00 crc kubenswrapper[4835]: W0201 07:49:00.657217 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2e2f8e4_eb90_4d97_8796_8f5d196577ce.slice/crio-8d63f4213b4e575f9fa7f6636745f4b4d555d78213c56efc75136d9adc404202 WatchSource:0}: Error finding container 8d63f4213b4e575f9fa7f6636745f4b4d555d78213c56efc75136d9adc404202: Status 404 returned error can't find the container with id 8d63f4213b4e575f9fa7f6636745f4b4d555d78213c56efc75136d9adc404202 Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.907550 4835 generic.go:334] "Generic (PLEG): container finished" podID="952a92f0-8bd4-4aa9-b437-af019f748380" containerID="6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997" exitCode=0 Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.907680 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerDied","Data":"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997"} Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.911458 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"39cc2216f3110369d2fdb141e31cb3f0931f6db70a6aab1d853e606d8dca7dc4"} Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.911490 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"8d63f4213b4e575f9fa7f6636745f4b4d555d78213c56efc75136d9adc404202"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.019169 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.020760 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:49:01 crc kubenswrapper[4835]: E0201 07:49:01.021195 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.021900 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.022843 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.585674 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1edd7394-0f8e-4271-8774-f228946e62f3" path="/var/lib/kubelet/pods/1edd7394-0f8e-4271-8774-f228946e62f3/volumes" Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.940987 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerStarted","Data":"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946569 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="e4501caf2712efde1072e65cdc2495e22511b6ca50d0de32e4362eb3116d1f13" exitCode=1 Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946599 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"41f50b96136eaae91636269f8bfa47862af4f96b115163aaffe156988450d4a4"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946613 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"c542416827eeef621bc9aca8e48a29338e6bd9c000c191055db8f6ea89995b19"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946622 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"6d543491d8f0729ab57b25dc009a5e53210189f8867bea16936e1ba49aa87463"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946631 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"9dfb9fafa9f7b6aca2e897462158b2e6918ac0c51e08838a2af0060d19e450ec"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946639 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"50187ec0044aa65d0dfc04bb190e11910e7dd6df21a714b706ced9753431b60b"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946647 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"e4501caf2712efde1072e65cdc2495e22511b6ca50d0de32e4362eb3116d1f13"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.961397 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xgrp2" podStartSLOduration=2.524816796 podStartE2EDuration="4.961376011s" podCreationTimestamp="2026-02-01 07:48:57 +0000 UTC" firstStartedPulling="2026-02-01 07:48:58.865298822 +0000 UTC m=+1611.985735266" lastFinishedPulling="2026-02-01 07:49:01.301858047 +0000 UTC m=+1614.422294481" observedRunningTime="2026-02-01 07:49:01.956102542 +0000 UTC m=+1615.076538976" watchObservedRunningTime="2026-02-01 07:49:01.961376011 +0000 UTC m=+1615.081812455" Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.956630 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="c542416827eeef621bc9aca8e48a29338e6bd9c000c191055db8f6ea89995b19" exitCode=1 Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957362 4835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"c542416827eeef621bc9aca8e48a29338e6bd9c000c191055db8f6ea89995b19"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957392 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"b5b5df0939e8da11d020fa69c912de29cb26187bba91448c0e8b628b35f0b613"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957403 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"fe6bd8e84d6ed5717736c29de8d74a04026b73df093d00dea9d9e4f338cae07c"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957416 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"dff20be10edb56e0bf7c65fae7a9a4a50e30929b326c3cc3407aee5e7fed7c13"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957444 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"bb9e30181dc2e29c7cbb808fa255eca4e29643c8d5a1d41ffb4eedef8cfda794"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957453 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"024d1559d50fedb11ec83f9a36946428ba56b4e2ee849e2174dde39b0f4b6245"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"f37851d985a30497d9ff14d46c11d28293ba0304df3383819707502eddde0548"} Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.976843 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="b5b5df0939e8da11d020fa69c912de29cb26187bba91448c0e8b628b35f0b613" exitCode=1 Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.977298 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="e7803d57ef9f8ca7ab7e274227ef6c8f5664fb9604460e89a7dccb307d6d3835" exitCode=1 Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.976939 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"b5b5df0939e8da11d020fa69c912de29cb26187bba91448c0e8b628b35f0b613"} Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.977355 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"e7803d57ef9f8ca7ab7e274227ef6c8f5664fb9604460e89a7dccb307d6d3835"} Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.977381 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"3ee15002522b3bad1068c43364071ba2181fc2a29d8e762e9687e95c5a3b7e1b"} Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 
07:49:03.977403 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"d66db67a8b5851acb3426fe89016568c6df1b70535718d50bab43208a03fa504"} Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.978219 4835 scope.go:117] "RemoveContainer" containerID="e4501caf2712efde1072e65cdc2495e22511b6ca50d0de32e4362eb3116d1f13" Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.978368 4835 scope.go:117] "RemoveContainer" containerID="c542416827eeef621bc9aca8e48a29338e6bd9c000c191055db8f6ea89995b19" Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.978620 4835 scope.go:117] "RemoveContainer" containerID="b5b5df0939e8da11d020fa69c912de29cb26187bba91448c0e8b628b35f0b613" Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.978706 4835 scope.go:117] "RemoveContainer" containerID="e7803d57ef9f8ca7ab7e274227ef6c8f5664fb9604460e89a7dccb307d6d3835" Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.035869 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992581 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" exitCode=1 Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992913 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" exitCode=1 Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992645 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77"} Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992941 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea"} Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992951 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6"} Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992961 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454"} Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992977 4835 scope.go:117] "RemoveContainer" containerID="c542416827eeef621bc9aca8e48a29338e6bd9c000c191055db8f6ea89995b19" Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.994491 4835 scope.go:117] "RemoveContainer" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.994785 4835 scope.go:117] "RemoveContainer" 
containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" Feb 01 07:49:04 crc kubenswrapper[4835]: E0201 07:49:04.996290 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:05 crc kubenswrapper[4835]: I0201 07:49:05.020782 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:05 crc kubenswrapper[4835]: I0201 07:49:05.060194 4835 scope.go:117] "RemoveContainer" containerID="e4501caf2712efde1072e65cdc2495e22511b6ca50d0de32e4362eb3116d1f13" Feb 01 07:49:05 crc kubenswrapper[4835]: I0201 07:49:05.567544 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:05 crc kubenswrapper[4835]: I0201 07:49:05.567594 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:49:05 crc kubenswrapper[4835]: E0201 07:49:05.567954 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.016831 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77" exitCode=1 Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.016885 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea" exitCode=1 Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.016937 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77"} Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017008 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea"} Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017047 4835 scope.go:117] "RemoveContainer" 
containerID="e7803d57ef9f8ca7ab7e274227ef6c8f5664fb9604460e89a7dccb307d6d3835" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017720 4835 scope.go:117] "RemoveContainer" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017812 4835 scope.go:117] "RemoveContainer" containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017926 4835 scope.go:117] "RemoveContainer" containerID="325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017984 4835 scope.go:117] "RemoveContainer" containerID="fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77" Feb 01 07:49:06 crc kubenswrapper[4835]: E0201 07:49:06.018323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.078389 4835 scope.go:117] "RemoveContainer" containerID="b5b5df0939e8da11d020fa69c912de29cb26187bba91448c0e8b628b35f0b613" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.022350 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.022833 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.023613 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.023640 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.023666 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc" gracePeriod=30 Feb 01 07:49:07 crc kubenswrapper[4835]: 
I0201 07:49:07.027793 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.043207 4835 scope.go:117] "RemoveContainer" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.043302 4835 scope.go:117] "RemoveContainer" containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.043487 4835 scope.go:117] "RemoveContainer" containerID="325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.043538 4835 scope.go:117] "RemoveContainer" containerID="fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77" Feb 01 07:49:07 crc kubenswrapper[4835]: E0201 07:49:07.043872 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:07 crc kubenswrapper[4835]: E0201 07:49:07.324912 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.051691 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc" exitCode=0 Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.051740 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc"} Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.052060 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68"} Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.052284 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.052639 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.172862 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.172942 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.234271 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:09 crc kubenswrapper[4835]: I0201 07:49:09.076548 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4"} Feb 01 07:49:09 crc kubenswrapper[4835]: I0201 07:49:09.076715 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:09 crc kubenswrapper[4835]: I0201 07:49:09.141392 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:09 crc kubenswrapper[4835]: I0201 07:49:09.204759 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:49:10 crc kubenswrapper[4835]: I0201 07:49:10.085990 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" exitCode=1 Feb 01 07:49:10 crc kubenswrapper[4835]: I0201 07:49:10.086093 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4"} Feb 01 07:49:10 crc kubenswrapper[4835]: I0201 07:49:10.086182 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:49:10 crc kubenswrapper[4835]: I0201 07:49:10.087139 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:10 crc kubenswrapper[4835]: E0201 07:49:10.087551 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.103046 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xgrp2" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="registry-server" containerID="cri-o://437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9" gracePeriod=2 Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.104271 4835 scope.go:117] "RemoveContainer" 
containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:11 crc kubenswrapper[4835]: E0201 07:49:11.104522 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.568612 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:49:11 crc kubenswrapper[4835]: E0201 07:49:11.569056 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.600962 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.763229 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds6g4\" (UniqueName: \"kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4\") pod \"952a92f0-8bd4-4aa9-b437-af019f748380\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.763591 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content\") pod \"952a92f0-8bd4-4aa9-b437-af019f748380\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.763809 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities\") pod \"952a92f0-8bd4-4aa9-b437-af019f748380\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.764766 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities" (OuterVolumeSpecName: "utilities") pod "952a92f0-8bd4-4aa9-b437-af019f748380" (UID: "952a92f0-8bd4-4aa9-b437-af019f748380"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.775704 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4" (OuterVolumeSpecName: "kube-api-access-ds6g4") pod "952a92f0-8bd4-4aa9-b437-af019f748380" (UID: "952a92f0-8bd4-4aa9-b437-af019f748380"). InnerVolumeSpecName "kube-api-access-ds6g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.826840 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "952a92f0-8bd4-4aa9-b437-af019f748380" (UID: "952a92f0-8bd4-4aa9-b437-af019f748380"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.865755 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds6g4\" (UniqueName: \"kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.865805 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.865824 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.113847 4835 generic.go:334] "Generic (PLEG): container finished" podID="952a92f0-8bd4-4aa9-b437-af019f748380" containerID="437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9" exitCode=0 Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.113904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerDied","Data":"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9"} Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.113938 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerDied","Data":"531b752c0353bd0cf7d0d623b4ef2f05ab183ae8b42ef50855bcea2f7ac14cc4"} Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.113962 4835 scope.go:117] "RemoveContainer" containerID="437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.114093 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.136900 4835 scope.go:117] "RemoveContainer" containerID="6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.163316 4835 scope.go:117] "RemoveContainer" containerID="811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.222739 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.237242 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.238026 4835 scope.go:117] "RemoveContainer" containerID="437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9" Feb 01 07:49:12 crc kubenswrapper[4835]: E0201 07:49:12.238466 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9\": container with ID starting with 437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9 not found: ID does not exist" containerID="437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.238492 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9"} err="failed to get container status \"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9\": rpc error: code = NotFound desc = could not find container \"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9\": container with ID starting with 437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9 not found: ID does not exist" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.238513 4835 scope.go:117] "RemoveContainer" containerID="6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997" Feb 01 07:49:12 crc kubenswrapper[4835]: E0201 07:49:12.238864 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997\": container with ID starting with 6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997 not found: ID does not exist" containerID="6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.238941 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997"} err="failed to get container status \"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997\": rpc error: code = NotFound desc = could not find container \"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997\": container with ID starting with 6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997 not found: ID does not exist" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.238997 4835 scope.go:117] "RemoveContainer" containerID="811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee" Feb 01 07:49:12 crc kubenswrapper[4835]: E0201 07:49:12.239452 4835 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee\": container with ID starting with 811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee not found: ID does not exist" containerID="811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.239522 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee"} err="failed to get container status \"811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee\": rpc error: code = NotFound desc = could not find container \"811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee\": container with ID starting with 811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee not found: ID does not exist" Feb 01 07:49:13 crc kubenswrapper[4835]: I0201 07:49:13.018903 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:13 crc kubenswrapper[4835]: I0201 07:49:13.019952 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:13 crc kubenswrapper[4835]: E0201 07:49:13.020485 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:13 crc kubenswrapper[4835]: I0201 07:49:13.022602 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:13 crc kubenswrapper[4835]: I0201 07:49:13.022640 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:13 crc kubenswrapper[4835]: I0201 07:49:13.582577 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" path="/var/lib/kubelet/pods/952a92f0-8bd4-4aa9-b437-af019f748380/volumes" Feb 01 07:49:15 crc kubenswrapper[4835]: I0201 07:49:15.021776 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:16 crc kubenswrapper[4835]: I0201 07:49:16.023142 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.021396 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP 
probe failed with statuscode: 503" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.021820 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.022824 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.022854 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.022901 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68" gracePeriod=30 Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.024125 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.194251 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68" exitCode=0 Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.194319 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68"} Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.194400 4835 scope.go:117] "RemoveContainer" containerID="397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc" Feb 01 07:49:19 crc kubenswrapper[4835]: E0201 07:49:19.900122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.021362 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.206111 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3"} Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.206549 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:20 crc kubenswrapper[4835]: 
I0201 07:49:20.206900 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:20 crc kubenswrapper[4835]: E0201 07:49:20.207205 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.566783 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.566827 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.567052 4835 scope.go:117] "RemoveContainer" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.567125 4835 scope.go:117] "RemoveContainer" containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.567211 4835 scope.go:117] "RemoveContainer" containerID="325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea" Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.567252 4835 scope.go:117] "RemoveContainer" containerID="fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77" Feb 01 07:49:20 crc kubenswrapper[4835]: E0201 07:49:20.764704 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.218085 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"} Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.218757 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.219218 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:21 crc kubenswrapper[4835]: E0201 07:49:21.219577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251134 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a" exitCode=1 Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251268 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8"} Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b"} Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251344 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a"} Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251400 4835 scope.go:117] "RemoveContainer" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251833 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:21 crc kubenswrapper[4835]: E0201 07:49:21.252032 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270590 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8" exitCode=1 Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270652 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b" exitCode=1 Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270682 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e" exitCode=1 Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270784 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8"} Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270831 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b"} Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270856 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e"} Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270899 4835 scope.go:117] "RemoveContainer" containerID="325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea" Feb 01 07:49:22 crc 
kubenswrapper[4835]: I0201 07:49:22.272293 4835 scope.go:117] "RemoveContainer" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.272519 4835 scope.go:117] "RemoveContainer" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.272804 4835 scope.go:117] "RemoveContainer" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.272931 4835 scope.go:117] "RemoveContainer" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e" Feb 01 07:49:22 crc kubenswrapper[4835]: E0201 07:49:22.273888 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.281793 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" exitCode=1 Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.281838 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"} Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.282702 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.282737 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:49:22 crc kubenswrapper[4835]: E0201 07:49:22.283121 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.314532 4835 scope.go:117] "RemoveContainer" 
containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.367053 4835 scope.go:117] "RemoveContainer" containerID="fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.404549 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.307206 4835 scope.go:117] "RemoveContainer" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.307726 4835 scope.go:117] "RemoveContainer" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.307903 4835 scope.go:117] "RemoveContainer" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.307969 4835 scope.go:117] "RemoveContainer" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e" Feb 01 07:49:23 crc kubenswrapper[4835]: E0201 07:49:23.308487 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.310759 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.310828 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:49:23 crc kubenswrapper[4835]: E0201 07:49:23.311202 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.566970 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:49:23 crc kubenswrapper[4835]: E0201 07:49:23.567358 4835 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:49:24 crc kubenswrapper[4835]: I0201 07:49:24.535828 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:49:24 crc kubenswrapper[4835]: I0201 07:49:24.536882 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:24 crc kubenswrapper[4835]: I0201 07:49:24.536905 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:49:24 crc kubenswrapper[4835]: E0201 07:49:24.537449 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:25 crc kubenswrapper[4835]: I0201 07:49:25.023381 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:25 crc kubenswrapper[4835]: I0201 07:49:25.023483 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:28 crc kubenswrapper[4835]: I0201 07:49:28.021572 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.860476 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"] Feb 01 07:49:29 crc kubenswrapper[4835]: E0201 07:49:29.860938 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="extract-utilities" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.860952 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="extract-utilities" Feb 01 07:49:29 crc kubenswrapper[4835]: E0201 07:49:29.860963 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="registry-server" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.860969 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="registry-server" Feb 01 07:49:29 crc 
kubenswrapper[4835]: E0201 07:49:29.860982 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="extract-content" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.860987 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="extract-content" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.861124 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="registry-server" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.866537 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.870262 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"] Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.979390 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.979467 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.979497 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbqn5\" (UniqueName: \"kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.023267 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.081376 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.081475 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.081512 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbqn5\" (UniqueName: \"kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5\") pod 
\"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.082007 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.082007 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.113080 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbqn5\" (UniqueName: \"kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.237134 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.698135 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"] Feb 01 07:49:30 crc kubenswrapper[4835]: W0201 07:49:30.699060 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124384c0_3e99_4689_bccb_5f0d29df89ee.slice/crio-7d22d53e511d179e4981719f8ecee51bc2645b3c446221ec04b613a6ca27ca6b WatchSource:0}: Error finding container 7d22d53e511d179e4981719f8ecee51bc2645b3c446221ec04b613a6ca27ca6b: Status 404 returned error can't find the container with id 7d22d53e511d179e4981719f8ecee51bc2645b3c446221ec04b613a6ca27ca6b Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.027275 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.027612 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.028343 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.028362 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.028389 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" 
containerID="cri-o://dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3" gracePeriod=30 Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.032651 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.381580 4835 generic.go:334] "Generic (PLEG): container finished" podID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerID="7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca" exitCode=0 Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.381675 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerDied","Data":"7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca"} Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.381891 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerStarted","Data":"7d22d53e511d179e4981719f8ecee51bc2645b3c446221ec04b613a6ca27ca6b"} Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.386252 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3" exitCode=0 Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.386279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3"} Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.386320 4835 scope.go:117] "RemoveContainer" containerID="b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68" Feb 01 07:49:32 crc kubenswrapper[4835]: I0201 07:49:32.395292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerStarted","Data":"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b"} Feb 01 07:49:32 crc kubenswrapper[4835]: I0201 07:49:32.405062 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"} Feb 01 07:49:32 crc kubenswrapper[4835]: I0201 07:49:32.405118 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9"} Feb 01 07:49:32 crc kubenswrapper[4835]: I0201 07:49:32.405400 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:32 crc kubenswrapper[4835]: I0201 07:49:32.407204 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:33 
crc kubenswrapper[4835]: I0201 07:49:33.432593 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" exitCode=1 Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.432800 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"} Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.433229 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.433272 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.433395 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:49:33 crc kubenswrapper[4835]: E0201 07:49:33.433891 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.449501 4835 generic.go:334] "Generic (PLEG): container finished" podID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerID="6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b" exitCode=0 Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.449564 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerDied","Data":"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b"} Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.568298 4835 scope.go:117] "RemoveContainer" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a" Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.568394 4835 scope.go:117] "RemoveContainer" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b" Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.568562 4835 scope.go:117] "RemoveContainer" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8" Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.568609 4835 scope.go:117] "RemoveContainer" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e" Feb 01 07:49:33 crc kubenswrapper[4835]: E0201 07:49:33.568941 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:34 crc kubenswrapper[4835]: I0201 07:49:34.018915 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:34 crc kubenswrapper[4835]: I0201 07:49:34.459269 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:49:34 crc kubenswrapper[4835]: E0201 07:49:34.459978 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:34 crc kubenswrapper[4835]: I0201 07:49:34.460991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerStarted","Data":"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974"} Feb 01 07:49:34 crc kubenswrapper[4835]: I0201 07:49:34.498905 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xtg6f" podStartSLOduration=2.914828404 podStartE2EDuration="5.498887575s" podCreationTimestamp="2026-02-01 07:49:29 +0000 UTC" firstStartedPulling="2026-02-01 07:49:31.382931709 +0000 UTC m=+1644.503368143" lastFinishedPulling="2026-02-01 07:49:33.96699086 +0000 UTC m=+1647.087427314" observedRunningTime="2026-02-01 07:49:34.491206405 +0000 UTC m=+1647.611642839" watchObservedRunningTime="2026-02-01 07:49:34.498887575 +0000 UTC m=+1647.619324009" Feb 01 07:49:35 crc kubenswrapper[4835]: I0201 07:49:35.470154 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:49:35 crc kubenswrapper[4835]: E0201 07:49:35.470611 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:35 crc kubenswrapper[4835]: I0201 07:49:35.566700 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:49:35 crc kubenswrapper[4835]: E0201 07:49:35.566941 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:49:36 crc kubenswrapper[4835]: I0201 07:49:36.566922 4835 scope.go:117] "RemoveContainer" 
containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:36 crc kubenswrapper[4835]: I0201 07:49:36.567281 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:49:36 crc kubenswrapper[4835]: E0201 07:49:36.567511 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:37 crc kubenswrapper[4835]: I0201 07:49:37.022947 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:40 crc kubenswrapper[4835]: I0201 07:49:40.020761 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:40 crc kubenswrapper[4835]: I0201 07:49:40.021147 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:40 crc kubenswrapper[4835]: I0201 07:49:40.237637 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:40 crc kubenswrapper[4835]: I0201 07:49:40.237739 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:41 crc kubenswrapper[4835]: I0201 07:49:41.299788 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xtg6f" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="registry-server" probeResult="failure" output=< Feb 01 07:49:41 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 01 07:49:41 crc kubenswrapper[4835]: > Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.021439 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.021565 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.022547 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, 
will be restarted" Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.022607 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.022658 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9" gracePeriod=30 Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.024979 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.553533 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9" exitCode=0 Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.553635 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9"} Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.554209 4835 scope.go:117] "RemoveContainer" containerID="dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3" Feb 01 07:49:43 crc kubenswrapper[4835]: E0201 07:49:43.655587 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:44 crc kubenswrapper[4835]: I0201 07:49:44.566737 4835 scope.go:117] "RemoveContainer" containerID="84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9" Feb 01 07:49:44 crc kubenswrapper[4835]: I0201 07:49:44.567533 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:49:44 crc kubenswrapper[4835]: E0201 07:49:44.567890 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:45 crc kubenswrapper[4835]: I0201 07:49:45.567459 4835 scope.go:117] "RemoveContainer" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a" Feb 01 07:49:45 crc kubenswrapper[4835]: I0201 
07:49:45.569527 4835 scope.go:117] "RemoveContainer" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b" Feb 01 07:49:45 crc kubenswrapper[4835]: I0201 07:49:45.569878 4835 scope.go:117] "RemoveContainer" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8" Feb 01 07:49:45 crc kubenswrapper[4835]: I0201 07:49:45.570152 4835 scope.go:117] "RemoveContainer" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e" Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.599902 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" exitCode=1 Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600217 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" exitCode=1 Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.599974 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd"} Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600253 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd"} Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600267 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3"} Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa"} Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600295 4835 scope.go:117] "RemoveContainer" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b" Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600940 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600999 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:49:46 crc kubenswrapper[4835]: E0201 07:49:46.601315 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.676910 4835 scope.go:117] "RemoveContainer" 
containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.622898 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" exitCode=1 Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.622951 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" exitCode=1 Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.622988 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd"} Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.623087 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd"} Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.623131 4835 scope.go:117] "RemoveContainer" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.624485 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.624654 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.624922 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.625059 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:49:47 crc kubenswrapper[4835]: E0201 07:49:47.625902 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.689470 4835 scope.go:117] "RemoveContainer" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8" Feb 01 07:49:49 crc kubenswrapper[4835]: I0201 07:49:49.566495 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 
07:49:49 crc kubenswrapper[4835]: E0201 07:49:49.567017 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:49:50 crc kubenswrapper[4835]: I0201 07:49:50.290420 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:50 crc kubenswrapper[4835]: I0201 07:49:50.340468 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:50 crc kubenswrapper[4835]: I0201 07:49:50.539774 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"] Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.567916 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.567971 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:49:51 crc kubenswrapper[4835]: E0201 07:49:51.568377 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.680959 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="fe6bd8e84d6ed5717736c29de8d74a04026b73df093d00dea9d9e4f338cae07c" exitCode=1 Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.681025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"fe6bd8e84d6ed5717736c29de8d74a04026b73df093d00dea9d9e4f338cae07c"} Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.681649 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xtg6f" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="registry-server" containerID="cri-o://b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974" gracePeriod=2 Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.682326 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.682546 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.682702 4835 scope.go:117] "RemoveContainer" containerID="fe6bd8e84d6ed5717736c29de8d74a04026b73df093d00dea9d9e4f338cae07c" Feb 01 07:49:51 crc 
kubenswrapper[4835]: I0201 07:49:51.682734 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.682831 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:49:51 crc kubenswrapper[4835]: E0201 07:49:51.976230 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.163430 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.269010 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content\") pod \"124384c0-3e99-4689-bccb-5f0d29df89ee\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.269111 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbqn5\" (UniqueName: \"kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5\") pod \"124384c0-3e99-4689-bccb-5f0d29df89ee\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.269237 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities\") pod \"124384c0-3e99-4689-bccb-5f0d29df89ee\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.271695 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities" (OuterVolumeSpecName: "utilities") pod "124384c0-3e99-4689-bccb-5f0d29df89ee" (UID: "124384c0-3e99-4689-bccb-5f0d29df89ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.275719 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5" (OuterVolumeSpecName: "kube-api-access-tbqn5") pod "124384c0-3e99-4689-bccb-5f0d29df89ee" (UID: "124384c0-3e99-4689-bccb-5f0d29df89ee"). InnerVolumeSpecName "kube-api-access-tbqn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.371623 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbqn5\" (UniqueName: \"kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.371668 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.416629 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "124384c0-3e99-4689-bccb-5f0d29df89ee" (UID: "124384c0-3e99-4689-bccb-5f0d29df89ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.472767 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.692459 4835 generic.go:334] "Generic (PLEG): container finished" podID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerID="b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974" exitCode=0 Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.692537 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.692566 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerDied","Data":"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974"} Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.692655 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerDied","Data":"7d22d53e511d179e4981719f8ecee51bc2645b3c446221ec04b613a6ca27ca6b"} Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.692690 4835 scope.go:117] "RemoveContainer" containerID="b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.721666 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66"} Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.722399 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.722527 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.722681 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.722857 4835 scope.go:117] "RemoveContainer" 
containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:49:52 crc kubenswrapper[4835]: E0201 07:49:52.723760 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.740583 4835 scope.go:117] "RemoveContainer" containerID="6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.751574 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"] Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.761563 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"] Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.772266 4835 scope.go:117] "RemoveContainer" containerID="7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.809468 4835 scope.go:117] "RemoveContainer" containerID="b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974" Feb 01 07:49:52 crc kubenswrapper[4835]: E0201 07:49:52.810118 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974\": container with ID starting with b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974 not found: ID does not exist" containerID="b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.810189 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974"} err="failed to get container status \"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974\": rpc error: code = NotFound desc = could not find container \"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974\": container with ID starting with b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974 not found: ID does not exist" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.810229 4835 scope.go:117] "RemoveContainer" containerID="6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b" Feb 01 07:49:52 crc kubenswrapper[4835]: E0201 07:49:52.811810 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b\": container with 
Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.812101 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b"} err="failed to get container status \"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b\": rpc error: code = NotFound desc = could not find container \"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b\": container with ID starting with 6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b not found: ID does not exist"
Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.812182 4835 scope.go:117] "RemoveContainer" containerID="7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca"
Feb 01 07:49:52 crc kubenswrapper[4835]: E0201 07:49:52.812751 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca\": container with ID starting with 7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca not found: ID does not exist" containerID="7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca"
Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.812800 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca"} err="failed to get container status \"7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca\": rpc error: code = NotFound desc = could not find container \"7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca\": container with ID starting with 7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca not found: ID does not exist"
Feb 01 07:49:53 crc kubenswrapper[4835]: I0201 07:49:53.582753 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" path="/var/lib/kubelet/pods/124384c0-3e99-4689-bccb-5f0d29df89ee/volumes"
Feb 01 07:49:56 crc kubenswrapper[4835]: I0201 07:49:56.568136 4835 scope.go:117] "RemoveContainer" containerID="84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9"
Feb 01 07:49:56 crc kubenswrapper[4835]: I0201 07:49:56.568198 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"
Feb 01 07:49:56 crc kubenswrapper[4835]: E0201 07:49:56.568625 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:50:02 crc kubenswrapper[4835]: I0201 07:50:02.567260 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"
Feb 01 07:50:02 crc kubenswrapper[4835]: E0201 07:50:02.569526 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.567531 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa"
Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.567899 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.567937 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.568088 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3"
Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.568278 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd"
Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.568326 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd"
Feb 01 07:50:03 crc kubenswrapper[4835]: E0201 07:50:03.568375 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:50:03 crc kubenswrapper[4835]: E0201 07:50:03.568750 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:50:08 crc kubenswrapper[4835]: I0201 07:50:08.571531 4835 scope.go:117] "RemoveContainer" containerID="84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9"
Feb 01 07:50:08 crc kubenswrapper[4835]: I0201 07:50:08.571584 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"
Feb 01 07:50:08 crc kubenswrapper[4835]: E0201 07:50:08.825959 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:50:08 crc kubenswrapper[4835]: I0201 07:50:08.883687 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074"}
Feb 01 07:50:08 crc kubenswrapper[4835]: I0201 07:50:08.884031 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 07:50:08 crc kubenswrapper[4835]: I0201 07:50:08.884579 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"
Feb 01 07:50:08 crc kubenswrapper[4835]: E0201 07:50:08.884891 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:50:09 crc kubenswrapper[4835]: I0201 07:50:09.895079 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"
Feb 01 07:50:09 crc kubenswrapper[4835]: E0201 07:50:09.895407 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:50:13 crc kubenswrapper[4835]: I0201 07:50:13.023099 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:50:15 crc kubenswrapper[4835]: I0201 07:50:15.021886 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:50:15 crc kubenswrapper[4835]: I0201 07:50:15.568442 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:50:15 crc kubenswrapper[4835]: I0201 07:50:15.568917 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:50:15 crc kubenswrapper[4835]: E0201 07:50:15.569721 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.021079 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.567685 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"
Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.568235 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa"
Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.568353 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3"
Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.568591 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd"
Feb 01 07:50:16 crc kubenswrapper[4835]: E0201 07:50:16.568642 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.568662 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd"
Feb 01 07:50:16 crc kubenswrapper[4835]: E0201 07:50:16.569121 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.021599 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.021713 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.022714 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.022750 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"
Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.022790 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" gracePeriod=30
Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.024180 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:50:19 crc kubenswrapper[4835]: E0201 07:50:19.377869 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.003791 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" exitCode=0
Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.003894 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074"}
Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.003953 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415"}
Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.003989 4835 scope.go:117] "RemoveContainer" containerID="84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9"
Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.004364 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.004777 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074"
Feb 01 07:50:20 crc kubenswrapper[4835]: E0201 07:50:20.005019 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:21 crc kubenswrapper[4835]: I0201 07:50:21.017841 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" exitCode=1 Feb 01 07:50:21 crc kubenswrapper[4835]: I0201 07:50:21.018174 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415"} Feb 01 07:50:21 crc kubenswrapper[4835]: I0201 07:50:21.018371 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:50:21 crc kubenswrapper[4835]: I0201 07:50:21.019601 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:21 crc kubenswrapper[4835]: I0201 07:50:21.019642 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:50:21 crc kubenswrapper[4835]: E0201 07:50:21.020290 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:22 crc kubenswrapper[4835]: I0201 07:50:22.019360 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:50:22 crc kubenswrapper[4835]: I0201 07:50:22.031980 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:22 crc kubenswrapper[4835]: I0201 07:50:22.032062 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:50:22 crc kubenswrapper[4835]: E0201 07:50:22.032572 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:23 crc kubenswrapper[4835]: I0201 07:50:23.044816 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:23 crc kubenswrapper[4835]: I0201 07:50:23.044864 4835 scope.go:117] "RemoveContainer" 
containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:50:23 crc kubenswrapper[4835]: E0201 07:50:23.045515 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.096978 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" exitCode=1 Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.097094 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66"} Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.097476 4835 scope.go:117] "RemoveContainer" containerID="fe6bd8e84d6ed5717736c29de8d74a04026b73df093d00dea9d9e4f338cae07c" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.099052 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.099170 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.099318 4835 scope.go:117] "RemoveContainer" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.099389 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.099610 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:50:28 crc kubenswrapper[4835]: I0201 07:50:28.112773 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" exitCode=1 Feb 01 07:50:28 crc kubenswrapper[4835]: I0201 07:50:28.113376 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c"} Feb 01 07:50:28 crc kubenswrapper[4835]: I0201 07:50:28.113401 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb"} Feb 01 07:50:28 crc kubenswrapper[4835]: I0201 07:50:28.113435 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9"} Feb 01 07:50:28 crc kubenswrapper[4835]: I0201 07:50:28.113461 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:50:28 crc kubenswrapper[4835]: E0201 07:50:28.317790 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.136778 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" exitCode=1 Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.136913 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" exitCode=1 Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.136938 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" exitCode=1 Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.136859 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c"} Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.136992 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb"} Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.137020 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9"} Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.137049 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.138453 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.138884 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.139107 4835 scope.go:117] "RemoveContainer" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.139147 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.139255 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:50:29 crc kubenswrapper[4835]: 
E0201 07:50:29.153015 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.211990 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.258510 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.158578 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.159139 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.159293 4835 scope.go:117] "RemoveContainer" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.159309 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.159368 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:50:30 crc kubenswrapper[4835]: E0201 07:50:30.159940 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for 
\"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.567156 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.567487 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.567733 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:50:30 crc kubenswrapper[4835]: E0201 07:50:30.568102 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:50:30 crc kubenswrapper[4835]: E0201 07:50:30.568599 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:50:36 crc kubenswrapper[4835]: I0201 07:50:36.567497 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:36 crc kubenswrapper[4835]: I0201 07:50:36.568282 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:50:36 crc kubenswrapper[4835]: E0201 07:50:36.568719 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.567846 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.568637 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.568791 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:50:43 crc 
Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.568850 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.568890 4835 scope.go:117] "RemoveContainer" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66"
Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.568909 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c"
Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.569000 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9"
Feb 01 07:50:43 crc kubenswrapper[4835]: E0201 07:50:43.778091 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:50:43 crc kubenswrapper[4835]: E0201 07:50:43.847876 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.299355 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc"}
Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.299649 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.299961 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:50:44 crc kubenswrapper[4835]: E0201 07:50:44.300267 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.318893 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967"}
Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.319671 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9"
Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.319752 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb"
Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.319864 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c"
Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.319905 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9"
Feb 01 07:50:44 crc kubenswrapper[4835]: E0201 07:50:44.320206 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.566568 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"
Feb 01 07:50:44 crc kubenswrapper[4835]: E0201 07:50:44.567081 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 07:50:45 crc kubenswrapper[4835]: I0201 07:50:45.110488 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 07:50:45 crc kubenswrapper[4835]: E0201 07:50:45.110656 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 07:50:45 crc kubenswrapper[4835]: E0201 07:50:45.110801 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:52:47.110742513 +0000 UTC m=+1840.231178987 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
Feb 01 07:50:45 crc kubenswrapper[4835]: I0201 07:50:45.328924 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:50:45 crc kubenswrapper[4835]: E0201 07:50:45.329268 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:50:46 crc kubenswrapper[4835]: E0201 07:50:46.699197 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc"
Feb 01 07:50:47 crc kubenswrapper[4835]: I0201 07:50:47.346514 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 07:50:48 crc kubenswrapper[4835]: I0201 07:50:48.539810 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:50:48 crc kubenswrapper[4835]: I0201 07:50:48.566569 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074"
Feb 01 07:50:48 crc kubenswrapper[4835]: I0201 07:50:48.566627 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415"
Feb 01 07:50:48 crc kubenswrapper[4835]: E0201 07:50:48.567062 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:50:51 crc kubenswrapper[4835]: I0201 07:50:51.537727 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:50:52 crc kubenswrapper[4835]: I0201 07:50:52.538137 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.538217 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.538312 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.539208 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.539241 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.539278 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc" gracePeriod=30
Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.541280 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:50:54 crc kubenswrapper[4835]: E0201 07:50:54.871977 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.433384 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc" exitCode=0
Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.433487 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc"}
Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.433528 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652"}
Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.433558 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.434380 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:50:55 crc kubenswrapper[4835]: E0201 07:50:55.434753 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.435060 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:50:56 crc kubenswrapper[4835]: I0201 07:50:56.447849 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:50:56 crc kubenswrapper[4835]: E0201 07:50:56.448327 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:50:56 crc kubenswrapper[4835]: I0201 07:50:56.566798 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"
Feb 01 07:50:56 crc kubenswrapper[4835]: E0201 07:50:56.567482 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 07:50:58 crc kubenswrapper[4835]: I0201 07:50:58.567194 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9"
Feb 01 07:50:58 crc kubenswrapper[4835]: I0201 07:50:58.567280 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb"
Feb 01 07:50:58 crc kubenswrapper[4835]: I0201 07:50:58.567398 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c"
Feb 01 07:50:58 crc kubenswrapper[4835]: I0201 07:50:58.567463 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9"
Feb 01 07:50:58 crc kubenswrapper[4835]: E0201 07:50:58.567855 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:50:59 crc kubenswrapper[4835]: I0201 07:50:59.566998 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074"
Feb 01 07:50:59 crc kubenswrapper[4835]: I0201 07:50:59.567341 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415"
Feb 01 07:50:59 crc kubenswrapper[4835]: E0201 07:50:59.787504 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:51:00 crc kubenswrapper[4835]: I0201 07:51:00.492161 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37"}
Feb 01 07:51:00 crc kubenswrapper[4835]: I0201 07:51:00.492965 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415"
Feb 01 07:51:00 crc kubenswrapper[4835]: E0201 07:51:00.493449 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:51:00 crc kubenswrapper[4835]: I0201 07:51:00.493560 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 07:51:00 crc kubenswrapper[4835]: I0201 07:51:00.538340 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:51:01 crc kubenswrapper[4835]: I0201 07:51:01.501636 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415"
Feb 01 07:51:01 crc kubenswrapper[4835]: E0201 07:51:01.502218 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:51:02 crc kubenswrapper[4835]: I0201 07:51:02.537460 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:51:03 crc kubenswrapper[4835]: I0201 07:51:03.537542 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
output="HTTP probe failed with statuscode: 503" Feb 01 07:51:04 crc kubenswrapper[4835]: I0201 07:51:04.024511 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:05 crc kubenswrapper[4835]: I0201 07:51:05.022287 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.537570 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.537705 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.539008 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.539062 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.539216 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" gracePeriod=30 Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.541141 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:06 crc kubenswrapper[4835]: E0201 07:51:06.665387 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.021152 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.536537 4835 prober.go:107] "Probe failed" probeType="Readiness" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.95:8080/healthcheck\": dial tcp 10.217.0.95:8080: connect: connection refused" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.563368 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" exitCode=0 Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.563483 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652"} Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.563560 4835 scope.go:117] "RemoveContainer" containerID="e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.564585 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.564668 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:51:07 crc kubenswrapper[4835]: E0201 07:51:07.565098 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.567630 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:51:07 crc kubenswrapper[4835]: E0201 07:51:07.568134 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.020253 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.020606 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.021569 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:51:10 
crc kubenswrapper[4835]: I0201 07:51:10.022196 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.022240 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.022271 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" gracePeriod=30 Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.023964 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:10 crc kubenswrapper[4835]: E0201 07:51:10.150614 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.567288 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.567383 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.567544 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.567602 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:10 crc kubenswrapper[4835]: E0201 07:51:10.568140 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.606756 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" exitCode=0
Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.606831 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37"}
Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.606900 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074"
Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.607643 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37"
Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.607690 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415"
Feb 01 07:51:10 crc kubenswrapper[4835]: E0201 07:51:10.607984 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.657106 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="f37851d985a30497d9ff14d46c11d28293ba0304df3383819707502eddde0548" exitCode=1
Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.657144 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"f37851d985a30497d9ff14d46c11d28293ba0304df3383819707502eddde0548"}
Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.658928 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9"
Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.659039 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb"
Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.659083 4835 scope.go:117] "RemoveContainer" containerID="f37851d985a30497d9ff14d46c11d28293ba0304df3383819707502eddde0548"
Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.659231 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c"
Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.659299 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9"
Feb 01 07:51:14 crc kubenswrapper[4835]: E0201 07:51:14.902531 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:51:15 crc kubenswrapper[4835]: I0201 07:51:15.678025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120"}
Feb 01 07:51:15 crc kubenswrapper[4835]: I0201 07:51:15.679249 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9"
Feb 01 07:51:15 crc kubenswrapper[4835]: I0201 07:51:15.679357 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb"
Feb 01 07:51:15 crc kubenswrapper[4835]: I0201 07:51:15.679522 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c"
Feb 01 07:51:15 crc kubenswrapper[4835]: I0201 07:51:15.679590 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9"
Feb 01 07:51:15 crc kubenswrapper[4835]: E0201 07:51:15.679909 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:51:21 crc kubenswrapper[4835]: I0201 07:51:21.567431 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652"
Feb 01 07:51:21 crc kubenswrapper[4835]: I0201 07:51:21.567733 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:51:21 crc kubenswrapper[4835]: I0201 07:51:21.567837 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"
Feb 01 07:51:21 crc kubenswrapper[4835]: E0201 07:51:21.568031 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:51:21 crc kubenswrapper[4835]: E0201 07:51:21.568584 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 07:51:22 crc kubenswrapper[4835]: I0201 07:51:22.568058 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37"
Feb 01 07:51:22 crc kubenswrapper[4835]: I0201 07:51:22.568548 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415"
Feb 01 07:51:22 crc kubenswrapper[4835]: E0201 07:51:22.568991 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:51:26 crc kubenswrapper[4835]: I0201 07:51:26.567148 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9"
Feb 01 07:51:26 crc kubenswrapper[4835]: I0201 07:51:26.567735 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb"
Feb 01 07:51:26 crc kubenswrapper[4835]: I0201 07:51:26.567989 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c"
Feb 01 07:51:26 crc kubenswrapper[4835]: I0201 07:51:26.568121 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9"
Feb 01 07:51:26 crc kubenswrapper[4835]: E0201 07:51:26.568769 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:51:33 crc kubenswrapper[4835]: I0201 07:51:33.567541 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:51:33 crc kubenswrapper[4835]: I0201 07:51:33.568196 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:51:33 crc kubenswrapper[4835]: E0201 07:51:33.568597 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:51:34 crc kubenswrapper[4835]: I0201 07:51:34.566701 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:51:34 crc kubenswrapper[4835]: E0201 07:51:34.567014 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.572389 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.572784 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.572810 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.572862 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:37 crc kubenswrapper[4835]: E0201 07:51:37.573018 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" 
pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.573050 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.573118 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:37 crc kubenswrapper[4835]: E0201 07:51:37.573675 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:51:46 crc kubenswrapper[4835]: I0201 07:51:46.567971 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:51:46 crc kubenswrapper[4835]: I0201 07:51:46.568603 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:51:46 crc kubenswrapper[4835]: E0201 07:51:46.568964 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:51:48 crc kubenswrapper[4835]: I0201 07:51:48.567650 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:48 crc kubenswrapper[4835]: I0201 07:51:48.567977 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:48 crc kubenswrapper[4835]: E0201 07:51:48.776940 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:49 crc kubenswrapper[4835]: I0201 07:51:49.006183 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" 
event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887"} Feb 01 07:51:49 crc kubenswrapper[4835]: I0201 07:51:49.006931 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:49 crc kubenswrapper[4835]: E0201 07:51:49.007272 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:49 crc kubenswrapper[4835]: I0201 07:51:49.007558 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:51:49 crc kubenswrapper[4835]: I0201 07:51:49.567940 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:51:49 crc kubenswrapper[4835]: E0201 07:51:49.568649 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:51:50 crc kubenswrapper[4835]: I0201 07:51:50.019602 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" exitCode=1 Feb 01 07:51:50 crc kubenswrapper[4835]: I0201 07:51:50.019662 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887"} Feb 01 07:51:50 crc kubenswrapper[4835]: I0201 07:51:50.020542 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:50 crc kubenswrapper[4835]: I0201 07:51:50.020790 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:50 crc kubenswrapper[4835]: I0201 07:51:50.020982 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:51:50 crc kubenswrapper[4835]: E0201 07:51:50.021768 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.036604 4835 scope.go:117] "RemoveContainer" 
containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.036660 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:51:51 crc kubenswrapper[4835]: E0201 07:51:51.037078 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.568063 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.568222 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.568459 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.568583 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:52 crc kubenswrapper[4835]: I0201 07:51:52.019471 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:51:52 crc kubenswrapper[4835]: I0201 07:51:52.066264 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63"} Feb 01 07:51:52 crc kubenswrapper[4835]: I0201 07:51:52.066314 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7"} Feb 01 07:51:52 crc kubenswrapper[4835]: I0201 07:51:52.067768 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:52 crc kubenswrapper[4835]: I0201 07:51:52.067967 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:51:52 crc kubenswrapper[4835]: E0201 07:51:52.068571 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090464 4835 
generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" exitCode=1 Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090539 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" exitCode=1 Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090539 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63"} Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090602 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7"} Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090621 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150"} Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090643 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090561 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" exitCode=1 Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090709 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" exitCode=1 Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090750 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b"} Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.091782 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.091985 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.092248 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.092360 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:51:53 crc kubenswrapper[4835]: E0201 07:51:53.092959 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.155754 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.205556 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.258750 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:54 crc kubenswrapper[4835]: I0201 07:51:54.111074 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:51:54 crc kubenswrapper[4835]: I0201 07:51:54.111193 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:51:54 crc kubenswrapper[4835]: I0201 07:51:54.111372 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:51:54 crc kubenswrapper[4835]: I0201 07:51:54.111467 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:51:54 crc kubenswrapper[4835]: E0201 07:51:54.111921 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:00 crc kubenswrapper[4835]: I0201 07:52:00.567338 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:52:00 crc kubenswrapper[4835]: E0201 07:52:00.568556 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:52:01 crc kubenswrapper[4835]: I0201 07:52:01.567708 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:52:01 crc kubenswrapper[4835]: I0201 07:52:01.567804 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:52:01 crc kubenswrapper[4835]: E0201 07:52:01.568294 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.567034 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.567452 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.567659 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.567776 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:06 crc kubenswrapper[4835]: E0201 07:52:06.567877 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.567958 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.568024 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:06 crc kubenswrapper[4835]: E0201 07:52:06.568583 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s 
restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:14 crc kubenswrapper[4835]: I0201 07:52:14.567622 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:52:14 crc kubenswrapper[4835]: E0201 07:52:14.568639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:52:16 crc kubenswrapper[4835]: I0201 07:52:16.567990 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:52:16 crc kubenswrapper[4835]: I0201 07:52:16.568053 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:52:16 crc kubenswrapper[4835]: E0201 07:52:16.568466 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:52:17 crc kubenswrapper[4835]: I0201 07:52:17.576397 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:52:17 crc kubenswrapper[4835]: I0201 07:52:17.576936 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:17 crc kubenswrapper[4835]: E0201 07:52:17.577523 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:18 crc kubenswrapper[4835]: I0201 07:52:18.568460 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:18 crc kubenswrapper[4835]: I0201 07:52:18.568947 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:18 crc kubenswrapper[4835]: 
I0201 07:52:18.569265 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:18 crc kubenswrapper[4835]: I0201 07:52:18.569538 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:18 crc kubenswrapper[4835]: E0201 07:52:18.570459 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:29 crc kubenswrapper[4835]: I0201 07:52:29.568132 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.499005 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7"} Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.508170 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120" exitCode=1 Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.508216 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120"} Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.508273 4835 scope.go:117] "RemoveContainer" containerID="f37851d985a30497d9ff14d46c11d28293ba0304df3383819707502eddde0548" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.509943 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.510084 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.510134 4835 scope.go:117] "RemoveContainer" containerID="6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.510286 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.510357 4835 scope.go:117] "RemoveContainer" 
containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:30 crc kubenswrapper[4835]: E0201 07:52:30.511027 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.582670 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.582699 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:30 crc kubenswrapper[4835]: E0201 07:52:30.761542 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:31 crc kubenswrapper[4835]: I0201 07:52:31.521392 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a"} Feb 01 07:52:31 crc kubenswrapper[4835]: I0201 07:52:31.522078 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:52:31 crc kubenswrapper[4835]: I0201 07:52:31.522577 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:31 crc kubenswrapper[4835]: E0201 07:52:31.523001 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:31 crc kubenswrapper[4835]: I0201 07:52:31.569118 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:52:31 crc kubenswrapper[4835]: I0201 
07:52:31.569175 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:52:31 crc kubenswrapper[4835]: E0201 07:52:31.569646 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:52:32 crc kubenswrapper[4835]: I0201 07:52:32.542703 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:32 crc kubenswrapper[4835]: E0201 07:52:32.543106 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:35 crc kubenswrapper[4835]: I0201 07:52:35.023500 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:37 crc kubenswrapper[4835]: I0201 07:52:37.024383 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:40 crc kubenswrapper[4835]: I0201 07:52:40.021823 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:40 crc kubenswrapper[4835]: I0201 07:52:40.022197 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:41 crc kubenswrapper[4835]: I0201 07:52:41.568348 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:41 crc kubenswrapper[4835]: I0201 07:52:41.568612 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:41 crc kubenswrapper[4835]: I0201 07:52:41.568676 4835 scope.go:117] "RemoveContainer" containerID="6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120" Feb 01 07:52:41 crc kubenswrapper[4835]: I0201 07:52:41.568837 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:41 crc kubenswrapper[4835]: I0201 07:52:41.568922 4835 scope.go:117] "RemoveContainer" 
containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:41 crc kubenswrapper[4835]: E0201 07:52:41.769124 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:42 crc kubenswrapper[4835]: I0201 07:52:42.666725 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482"} Feb 01 07:52:42 crc kubenswrapper[4835]: I0201 07:52:42.670772 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:42 crc kubenswrapper[4835]: I0201 07:52:42.670915 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:42 crc kubenswrapper[4835]: I0201 07:52:42.671106 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:42 crc kubenswrapper[4835]: I0201 07:52:42.671175 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:42 crc kubenswrapper[4835]: E0201 07:52:42.671917 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.021602 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.021715 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.022671 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.022712 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.022760 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" gracePeriod=30 Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.025265 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:43 crc kubenswrapper[4835]: E0201 07:52:43.150692 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.682496 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" exitCode=0 Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.682615 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a"} Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.683203 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.683998 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.684071 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:43 crc kubenswrapper[4835]: E0201 07:52:43.684610 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:45 crc kubenswrapper[4835]: I0201 07:52:45.567545 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:52:45 crc kubenswrapper[4835]: I0201 07:52:45.567583 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:52:45 crc kubenswrapper[4835]: E0201 07:52:45.568041 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:52:47 crc kubenswrapper[4835]: I0201 07:52:47.121208 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:52:47 crc kubenswrapper[4835]: E0201 07:52:47.121536 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:52:47 crc kubenswrapper[4835]: E0201 07:52:47.122205 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:54:49.122167659 +0000 UTC m=+1962.242604093 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:52:50 crc kubenswrapper[4835]: E0201 07:52:50.348250 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:52:50 crc kubenswrapper[4835]: I0201 07:52:50.747310 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:52:54 crc kubenswrapper[4835]: I0201 07:52:54.567249 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:54 crc kubenswrapper[4835]: I0201 07:52:54.567457 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:54 crc kubenswrapper[4835]: I0201 07:52:54.568068 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:54 crc kubenswrapper[4835]: I0201 07:52:54.568158 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:54 crc kubenswrapper[4835]: E0201 07:52:54.568755 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:55 crc kubenswrapper[4835]: I0201 07:52:55.567101 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:52:55 crc kubenswrapper[4835]: I0201 07:52:55.567164 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:55 crc kubenswrapper[4835]: E0201 07:52:55.569691 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:58 crc kubenswrapper[4835]: I0201 07:52:58.567164 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:52:58 crc kubenswrapper[4835]: I0201 07:52:58.567583 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:52:58 crc kubenswrapper[4835]: E0201 07:52:58.567973 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:53:06 crc kubenswrapper[4835]: I0201 07:53:06.566756 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:53:06 crc kubenswrapper[4835]: I0201 07:53:06.568535 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:53:06 crc kubenswrapper[4835]: E0201 07:53:06.569070 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:53:07 crc kubenswrapper[4835]: I0201 07:53:07.570860 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:53:07 crc kubenswrapper[4835]: I0201 07:53:07.570947 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:53:07 crc kubenswrapper[4835]: I0201 07:53:07.571034 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:53:07 crc kubenswrapper[4835]: I0201 07:53:07.571066 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:53:07 crc kubenswrapper[4835]: E0201 07:53:07.571367 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:53:13 crc kubenswrapper[4835]: I0201 07:53:13.567444 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:53:13 crc kubenswrapper[4835]: I0201 07:53:13.568069 4835 scope.go:117] "RemoveContainer" 
containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:53:13 crc kubenswrapper[4835]: E0201 07:53:13.568487 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:53:20 crc kubenswrapper[4835]: I0201 07:53:20.566975 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:53:20 crc kubenswrapper[4835]: I0201 07:53:20.567707 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:53:20 crc kubenswrapper[4835]: E0201 07:53:20.568122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:53:21 crc kubenswrapper[4835]: I0201 07:53:21.568147 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:53:21 crc kubenswrapper[4835]: I0201 07:53:21.568317 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:53:21 crc kubenswrapper[4835]: I0201 07:53:21.568642 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:53:21 crc kubenswrapper[4835]: I0201 07:53:21.568742 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:53:21 crc kubenswrapper[4835]: E0201 07:53:21.569401 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" 
podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:53:24 crc kubenswrapper[4835]: I0201 07:53:24.566838 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:53:24 crc kubenswrapper[4835]: I0201 07:53:24.567314 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:53:24 crc kubenswrapper[4835]: E0201 07:53:24.567902 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:53:31 crc kubenswrapper[4835]: I0201 07:53:31.567452 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:53:31 crc kubenswrapper[4835]: I0201 07:53:31.568071 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:53:31 crc kubenswrapper[4835]: E0201 07:53:31.568503 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:53:34 crc kubenswrapper[4835]: I0201 07:53:34.568110 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:53:34 crc kubenswrapper[4835]: I0201 07:53:34.568690 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:53:34 crc kubenswrapper[4835]: I0201 07:53:34.568906 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:53:34 crc kubenswrapper[4835]: I0201 07:53:34.568976 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:53:34 crc kubenswrapper[4835]: E0201 07:53:34.569568 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:53:38 crc kubenswrapper[4835]: I0201 07:53:38.567110 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:53:38 crc kubenswrapper[4835]: I0201 07:53:38.567582 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:53:38 crc kubenswrapper[4835]: E0201 07:53:38.568192 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.246972 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967" exitCode=1 Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.247048 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967"} Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.247127 4835 scope.go:117] "RemoveContainer" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.248261 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.248373 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.248595 4835 scope.go:117] "RemoveContainer" containerID="7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.248631 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.248695 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:53:41 crc kubenswrapper[4835]: E0201 07:53:41.249459 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:53:46 crc kubenswrapper[4835]: I0201 07:53:46.567155 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:53:46 crc kubenswrapper[4835]: I0201 07:53:46.568029 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:53:46 crc kubenswrapper[4835]: E0201 07:53:46.568525 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:53:51 crc kubenswrapper[4835]: I0201 07:53:51.567946 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:53:51 crc kubenswrapper[4835]: I0201 07:53:51.568540 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:53:51 crc kubenswrapper[4835]: E0201 07:53:51.568958 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:53:53 crc kubenswrapper[4835]: I0201 07:53:53.567235 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:53:53 crc kubenswrapper[4835]: I0201 07:53:53.567409 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:53:53 crc kubenswrapper[4835]: I0201 07:53:53.567637 4835 scope.go:117] "RemoveContainer" containerID="7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967" Feb 01 07:53:53 crc kubenswrapper[4835]: I0201 07:53:53.567651 4835 scope.go:117] "RemoveContainer" 
containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:53:53 crc kubenswrapper[4835]: I0201 07:53:53.567721 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:53:53 crc kubenswrapper[4835]: E0201 07:53:53.568476 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:53:59 crc kubenswrapper[4835]: I0201 07:53:59.567177 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:53:59 crc kubenswrapper[4835]: I0201 07:53:59.567643 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:53:59 crc kubenswrapper[4835]: E0201 07:53:59.568301 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:03 crc kubenswrapper[4835]: I0201 07:54:03.567740 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:03 crc kubenswrapper[4835]: I0201 07:54:03.568331 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:54:03 crc kubenswrapper[4835]: E0201 07:54:03.568629 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.493221 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" exitCode=1 Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.493279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482"} Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.493687 4835 scope.go:117] "RemoveContainer" containerID="6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494519 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494571 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494591 4835 scope.go:117] "RemoveContainer" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494654 4835 scope.go:117] "RemoveContainer" containerID="7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494661 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494693 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:06 crc kubenswrapper[4835]: E0201 07:54:06.676394 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:07 crc kubenswrapper[4835]: I0201 07:54:07.515771 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857"} Feb 01 07:54:07 
crc kubenswrapper[4835]: I0201 07:54:07.516933 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:07 crc kubenswrapper[4835]: I0201 07:54:07.517005 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:07 crc kubenswrapper[4835]: I0201 07:54:07.517032 4835 scope.go:117] "RemoveContainer" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" Feb 01 07:54:07 crc kubenswrapper[4835]: I0201 07:54:07.517110 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:07 crc kubenswrapper[4835]: I0201 07:54:07.517151 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:07 crc kubenswrapper[4835]: E0201 07:54:07.517507 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:12 crc kubenswrapper[4835]: I0201 07:54:12.568168 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:12 crc kubenswrapper[4835]: I0201 07:54:12.571250 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:54:12 crc kubenswrapper[4835]: E0201 07:54:12.572040 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.603319 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" exitCode=1 Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.603453 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857"} Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.603796 4835 scope.go:117] "RemoveContainer" containerID="7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605020 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605174 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605235 4835 scope.go:117] "RemoveContainer" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605359 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605392 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605533 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:15 crc kubenswrapper[4835]: E0201 07:54:15.606329 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:17 crc kubenswrapper[4835]: I0201 07:54:17.575557 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:17 crc kubenswrapper[4835]: I0201 07:54:17.577568 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:54:17 crc kubenswrapper[4835]: E0201 07:54:17.578166 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:27 crc kubenswrapper[4835]: I0201 07:54:27.575076 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:27 crc kubenswrapper[4835]: I0201 07:54:27.575887 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:54:27 crc kubenswrapper[4835]: E0201 07:54:27.576276 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.567593 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.568999 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.569100 4835 scope.go:117] "RemoveContainer" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.569194 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.569235 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.569973 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.570020 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.570115 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:29 crc kubenswrapper[4835]: E0201 07:54:29.804963 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:29 crc kubenswrapper[4835]: E0201 07:54:29.805201 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.761715 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" exitCode=1 Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.761801 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f"} Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.761839 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.762623 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.762657 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:54:30 crc kubenswrapper[4835]: E0201 07:54:30.763111 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.775087 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758"} Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.776338 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.776501 4835 scope.go:117] "RemoveContainer" 
containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.776673 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.776698 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.776763 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:30 crc kubenswrapper[4835]: E0201 07:54:30.777352 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:32 crc kubenswrapper[4835]: I0201 07:54:32.535520 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:54:32 crc kubenswrapper[4835]: I0201 07:54:32.537283 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:32 crc kubenswrapper[4835]: I0201 07:54:32.537351 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:54:32 crc kubenswrapper[4835]: E0201 07:54:32.537987 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:33 crc kubenswrapper[4835]: I0201 07:54:33.535270 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:54:33 crc kubenswrapper[4835]: I0201 07:54:33.536096 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:33 crc kubenswrapper[4835]: I0201 
07:54:33.536126 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:54:33 crc kubenswrapper[4835]: E0201 07:54:33.536577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:41 crc kubenswrapper[4835]: I0201 07:54:41.567495 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:41 crc kubenswrapper[4835]: I0201 07:54:41.567963 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:54:41 crc kubenswrapper[4835]: E0201 07:54:41.748619 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:41 crc kubenswrapper[4835]: I0201 07:54:41.897096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11"} Feb 01 07:54:41 crc kubenswrapper[4835]: I0201 07:54:41.897820 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:54:41 crc kubenswrapper[4835]: I0201 07:54:41.898198 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:41 crc kubenswrapper[4835]: E0201 07:54:41.898715 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.568087 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.568252 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.568350 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.568360 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.568423 4835 scope.go:117] "RemoveContainer" 
containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.914013 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89"} Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.916487 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" exitCode=1 Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.916589 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11"} Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.916675 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.917272 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.917368 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:54:42 crc kubenswrapper[4835]: E0201 07:54:42.917658 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.018753 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:54:43 crc kubenswrapper[4835]: E0201 07:54:43.278162 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.928126 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.928876 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:54:43 crc kubenswrapper[4835]: E0201 07:54:43.929239 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.938452 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" exitCode=1 Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.938771 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" exitCode=1 Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.938921 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" exitCode=1 Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939037 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" exitCode=1 Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.938528 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89"} Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939330 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536"} Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939497 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150"} Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939616 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794"} Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939420 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939836 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.940014 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.940216 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.940304 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.940485 4835 scope.go:117] "RemoveContainer" 
containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:54:43 crc kubenswrapper[4835]: E0201 07:54:43.941069 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.990942 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.031850 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.074828 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.961212 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.961678 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.961722 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.961874 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.962027 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.962042 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.962128 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:54:44 crc kubenswrapper[4835]: E0201 07:54:44.962774 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for 
\"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:44 crc kubenswrapper[4835]: E0201 07:54:44.963208 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:45 crc kubenswrapper[4835]: I0201 07:54:45.567566 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:45 crc kubenswrapper[4835]: I0201 07:54:45.567609 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:54:45 crc kubenswrapper[4835]: E0201 07:54:45.568001 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:49 crc kubenswrapper[4835]: I0201 07:54:49.166345 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:54:49 crc kubenswrapper[4835]: E0201 07:54:49.166574 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:54:49 crc kubenswrapper[4835]: E0201 07:54:49.166735 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. 
No retries permitted until 2026-02-01 07:56:51.16669729 +0000 UTC m=+2084.287133764 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:54:53 crc kubenswrapper[4835]: E0201 07:54:53.748666 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:54:54 crc kubenswrapper[4835]: I0201 07:54:54.045066 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:54:55 crc kubenswrapper[4835]: I0201 07:54:55.191744 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:54:55 crc kubenswrapper[4835]: I0201 07:54:55.191828 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:54:56 crc kubenswrapper[4835]: I0201 07:54:56.567803 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:56 crc kubenswrapper[4835]: I0201 07:54:56.567855 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:54:56 crc kubenswrapper[4835]: E0201 07:54:56.568263 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:57 crc kubenswrapper[4835]: I0201 07:54:57.576499 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:54:57 crc kubenswrapper[4835]: I0201 07:54:57.576636 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:54:57 crc kubenswrapper[4835]: I0201 07:54:57.576794 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:57 crc kubenswrapper[4835]: I0201 07:54:57.576809 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:54:57 crc kubenswrapper[4835]: I0201 07:54:57.576878 4835 scope.go:117] "RemoveContainer" 
containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:54:57 crc kubenswrapper[4835]: E0201 07:54:57.793876 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:58 crc kubenswrapper[4835]: I0201 07:54:58.095794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0"} Feb 01 07:54:58 crc kubenswrapper[4835]: I0201 07:54:58.096885 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:54:58 crc kubenswrapper[4835]: I0201 07:54:58.097025 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:54:58 crc kubenswrapper[4835]: I0201 07:54:58.097224 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:54:58 crc kubenswrapper[4835]: I0201 07:54:58.097291 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:54:58 crc kubenswrapper[4835]: E0201 07:54:58.097923 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:55:00 crc kubenswrapper[4835]: I0201 07:55:00.566992 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:55:00 crc kubenswrapper[4835]: I0201 07:55:00.567048 4835 scope.go:117] 
"RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:55:00 crc kubenswrapper[4835]: E0201 07:55:00.567323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:55:08 crc kubenswrapper[4835]: I0201 07:55:08.567076 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:55:08 crc kubenswrapper[4835]: I0201 07:55:08.567610 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:08 crc kubenswrapper[4835]: E0201 07:55:08.568058 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:09 crc kubenswrapper[4835]: I0201 07:55:09.569134 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:55:09 crc kubenswrapper[4835]: I0201 07:55:09.569706 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:55:09 crc kubenswrapper[4835]: I0201 07:55:09.569888 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:55:09 crc kubenswrapper[4835]: I0201 07:55:09.569953 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:55:09 crc kubenswrapper[4835]: E0201 07:55:09.570477 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" 
podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:55:12 crc kubenswrapper[4835]: I0201 07:55:12.567089 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:55:12 crc kubenswrapper[4835]: I0201 07:55:12.567133 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:55:12 crc kubenswrapper[4835]: E0201 07:55:12.567510 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.566394 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.567605 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.567685 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.567747 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.567828 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.567862 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:55:22 crc kubenswrapper[4835]: E0201 07:55:22.568073 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:22 crc kubenswrapper[4835]: E0201 07:55:22.568102 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:55:25 crc kubenswrapper[4835]: I0201 07:55:25.192916 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:55:25 crc kubenswrapper[4835]: I0201 07:55:25.193759 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:55:26 crc kubenswrapper[4835]: I0201 07:55:26.567016 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:55:26 crc kubenswrapper[4835]: I0201 07:55:26.567058 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:55:26 crc kubenswrapper[4835]: E0201 07:55:26.567255 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:55:35 crc kubenswrapper[4835]: I0201 07:55:35.567599 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:55:35 crc kubenswrapper[4835]: I0201 07:55:35.568297 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:55:35 crc kubenswrapper[4835]: I0201 07:55:35.568451 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:55:35 crc kubenswrapper[4835]: I0201 07:55:35.568509 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:55:35 crc kubenswrapper[4835]: E0201 07:55:35.568861 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:55:37 crc kubenswrapper[4835]: I0201 07:55:37.575678 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:55:37 crc kubenswrapper[4835]: I0201 07:55:37.576129 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:55:37 crc kubenswrapper[4835]: I0201 07:55:37.576180 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:55:37 crc kubenswrapper[4835]: I0201 07:55:37.576222 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:37 crc kubenswrapper[4835]: E0201 07:55:37.576684 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:55:37 crc kubenswrapper[4835]: E0201 07:55:37.829252 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:38 crc kubenswrapper[4835]: I0201 07:55:38.464597 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553"} Feb 01 07:55:38 crc kubenswrapper[4835]: I0201 07:55:38.464983 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:55:38 crc kubenswrapper[4835]: I0201 07:55:38.466219 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:38 crc kubenswrapper[4835]: E0201 07:55:38.466705 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:39 crc kubenswrapper[4835]: I0201 07:55:39.483852 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 
01 07:55:39 crc kubenswrapper[4835]: E0201 07:55:39.485159 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:43 crc kubenswrapper[4835]: I0201 07:55:43.025387 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:55:45 crc kubenswrapper[4835]: I0201 07:55:45.021595 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:55:46 crc kubenswrapper[4835]: I0201 07:55:46.021818 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.023066 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.023209 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.024351 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.024387 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.024459 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" gracePeriod=30 Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.027992 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:55:49 crc kubenswrapper[4835]: E0201 07:55:49.152379 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.568274 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.568439 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.568624 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.568691 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:55:49 crc kubenswrapper[4835]: E0201 07:55:49.569269 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.572145 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" exitCode=0 Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.584511 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553"} Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.584596 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.585962 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.586022 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:49 crc kubenswrapper[4835]: E0201 07:55:49.586638 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to 
\"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:51 crc kubenswrapper[4835]: I0201 07:55:51.567548 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:55:51 crc kubenswrapper[4835]: I0201 07:55:51.567974 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:55:51 crc kubenswrapper[4835]: E0201 07:55:51.568542 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.192372 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.192556 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.192636 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.193745 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.193887 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7" gracePeriod=600 Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.633531 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7" exitCode=0 Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.633586 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7"} Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.633617 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"} Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.633638 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.658262 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" exitCode=1 Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.658340 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758"} Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.658733 4835 scope.go:117] "RemoveContainer" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.659733 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.659851 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.659909 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.660053 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.660122 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:55:56 crc kubenswrapper[4835]: E0201 07:55:56.660688 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:02 crc kubenswrapper[4835]: I0201 07:56:02.566957 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:56:02 crc kubenswrapper[4835]: I0201 07:56:02.567650 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:56:02 crc kubenswrapper[4835]: E0201 07:56:02.568032 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:56:04 crc kubenswrapper[4835]: I0201 07:56:04.567917 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:56:04 crc kubenswrapper[4835]: I0201 07:56:04.568274 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:04 crc kubenswrapper[4835]: E0201 07:56:04.568544 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:09 crc kubenswrapper[4835]: I0201 07:56:09.567631 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:09 crc kubenswrapper[4835]: I0201 07:56:09.568514 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:09 crc kubenswrapper[4835]: I0201 07:56:09.568570 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:56:09 crc kubenswrapper[4835]: I0201 07:56:09.568719 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:09 crc kubenswrapper[4835]: I0201 07:56:09.568807 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:09 crc kubenswrapper[4835]: E0201 07:56:09.569511 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:15 crc kubenswrapper[4835]: I0201 07:56:15.566813 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:56:15 crc kubenswrapper[4835]: I0201 07:56:15.567612 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:56:15 crc kubenswrapper[4835]: E0201 07:56:15.568037 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:56:19 crc kubenswrapper[4835]: I0201 07:56:19.567200 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:56:19 crc kubenswrapper[4835]: I0201 07:56:19.567569 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:19 crc kubenswrapper[4835]: E0201 07:56:19.786956 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:19 crc kubenswrapper[4835]: I0201 07:56:19.913913 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7"} Feb 01 07:56:19 crc kubenswrapper[4835]: I0201 07:56:19.914156 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:56:19 crc kubenswrapper[4835]: I0201 07:56:19.914677 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:19 crc kubenswrapper[4835]: E0201 07:56:19.915078 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.566754 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.566827 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.566856 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.566920 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.566959 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:20 crc kubenswrapper[4835]: E0201 07:56:20.567298 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.923924 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:20 crc kubenswrapper[4835]: E0201 07:56:20.924611 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:24 crc kubenswrapper[4835]: I0201 07:56:24.543307 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:27 crc kubenswrapper[4835]: I0201 07:56:27.537806 4835 prober.go:107] "Probe failed" probeType="Liveness" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:27 crc kubenswrapper[4835]: I0201 07:56:27.537838 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:27 crc kubenswrapper[4835]: I0201 07:56:27.573944 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:56:27 crc kubenswrapper[4835]: I0201 07:56:27.574293 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:56:27 crc kubenswrapper[4835]: E0201 07:56:27.574775 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.541717 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.542136 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.542874 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.542899 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.542932 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7" gracePeriod=30 Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.550451 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:30 crc kubenswrapper[4835]: E0201 07:56:30.852791 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.017909 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7" exitCode=0 Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.017947 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7"} Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.017971 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c"} Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.017986 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.018590 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:31 crc kubenswrapper[4835]: E0201 07:56:31.018750 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.018875 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.029995 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:32 crc kubenswrapper[4835]: E0201 07:56:32.030608 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.566949 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.567028 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.567057 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.567129 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.567187 4835 scope.go:117] "RemoveContainer" 
containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:32 crc kubenswrapper[4835]: E0201 07:56:32.567518 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.068713 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" exitCode=1 Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.068815 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0"} Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.069252 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070330 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070498 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070545 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070643 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070676 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070744 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:36 crc kubenswrapper[4835]: E0201 07:56:36.071407 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", 
failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.538074 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:37 crc kubenswrapper[4835]: I0201 07:56:37.538380 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:38 crc kubenswrapper[4835]: I0201 07:56:38.566519 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:56:38 crc kubenswrapper[4835]: I0201 07:56:38.566895 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:56:38 crc kubenswrapper[4835]: E0201 07:56:38.567245 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:56:39 crc kubenswrapper[4835]: I0201 07:56:39.542029 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.537679 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.537729 4835 prober.go:107] 
"Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.537829 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.539070 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.539107 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.539161 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" gracePeriod=30 Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.540201 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:42 crc kubenswrapper[4835]: E0201 07:56:42.661009 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:43 crc kubenswrapper[4835]: I0201 07:56:43.217664 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" exitCode=0 Feb 01 07:56:43 crc kubenswrapper[4835]: I0201 07:56:43.217677 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c"} Feb 01 07:56:43 crc kubenswrapper[4835]: I0201 07:56:43.217745 4835 scope.go:117] "RemoveContainer" containerID="88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7" Feb 01 07:56:43 crc kubenswrapper[4835]: I0201 07:56:43.218330 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:56:43 crc kubenswrapper[4835]: I0201 07:56:43.218360 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:43 crc kubenswrapper[4835]: E0201 07:56:43.218712 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.577060 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.577979 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.578012 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.578118 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.578128 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.578171 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:47 crc kubenswrapper[4835]: E0201 07:56:47.768973 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.276842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9"} Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.277874 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.277995 4835 scope.go:117] 
"RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.278203 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.278229 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.278299 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:48 crc kubenswrapper[4835]: E0201 07:56:48.278847 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:50 crc kubenswrapper[4835]: I0201 07:56:50.566815 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:56:50 crc kubenswrapper[4835]: I0201 07:56:50.567155 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:56:50 crc kubenswrapper[4835]: E0201 07:56:50.567615 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:56:51 crc kubenswrapper[4835]: E0201 07:56:51.182775 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:56:51 crc kubenswrapper[4835]: E0201 07:56:51.182886 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:58:53.182861591 +0000 UTC m=+2206.303298055 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:56:51 crc kubenswrapper[4835]: I0201 07:56:51.182649 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:56:57 crc kubenswrapper[4835]: E0201 07:56:57.047201 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:56:57 crc kubenswrapper[4835]: I0201 07:56:57.354231 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:56:57 crc kubenswrapper[4835]: I0201 07:56:57.575130 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:56:57 crc kubenswrapper[4835]: I0201 07:56:57.575168 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:57 crc kubenswrapper[4835]: E0201 07:56:57.575437 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:57:00 crc kubenswrapper[4835]: I0201 07:57:00.567068 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:57:00 crc kubenswrapper[4835]: I0201 07:57:00.567362 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:57:00 crc kubenswrapper[4835]: I0201 07:57:00.567452 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:57:00 crc kubenswrapper[4835]: I0201 07:57:00.567460 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:57:00 crc kubenswrapper[4835]: I0201 07:57:00.567491 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:57:00 crc kubenswrapper[4835]: E0201 07:57:00.567773 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:57:04 crc kubenswrapper[4835]: I0201 07:57:04.566837 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:57:04 crc kubenswrapper[4835]: I0201 07:57:04.568897 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:57:04 crc kubenswrapper[4835]: E0201 07:57:04.569665 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:57:08 crc kubenswrapper[4835]: I0201 07:57:08.567240 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:57:08 crc kubenswrapper[4835]: I0201 07:57:08.567741 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:57:08 crc kubenswrapper[4835]: E0201 07:57:08.568141 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:57:12 crc kubenswrapper[4835]: I0201 07:57:12.567307 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:57:12 crc kubenswrapper[4835]: I0201 07:57:12.567663 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:57:12 crc kubenswrapper[4835]: I0201 07:57:12.567739 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:57:12 crc kubenswrapper[4835]: I0201 07:57:12.567746 4835 
scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:57:12 crc kubenswrapper[4835]: I0201 07:57:12.567775 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:57:12 crc kubenswrapper[4835]: E0201 07:57:12.568044 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:57:17 crc kubenswrapper[4835]: I0201 07:57:17.578858 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:57:17 crc kubenswrapper[4835]: I0201 07:57:17.579220 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:57:17 crc kubenswrapper[4835]: E0201 07:57:17.579637 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:57:22 crc kubenswrapper[4835]: I0201 07:57:22.567628 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:57:22 crc kubenswrapper[4835]: I0201 07:57:22.567668 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:57:22 crc kubenswrapper[4835]: E0201 07:57:22.567961 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:57:24 crc kubenswrapper[4835]: I0201 07:57:24.567042 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:57:24 crc kubenswrapper[4835]: I0201 07:57:24.567558 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:57:24 crc kubenswrapper[4835]: I0201 07:57:24.567710 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:57:24 crc kubenswrapper[4835]: I0201 07:57:24.567725 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:57:24 crc kubenswrapper[4835]: I0201 07:57:24.567789 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:57:24 crc kubenswrapper[4835]: E0201 07:57:24.568323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:57:31 crc kubenswrapper[4835]: I0201 07:57:31.567036 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:57:31 crc kubenswrapper[4835]: I0201 07:57:31.567748 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:57:31 crc kubenswrapper[4835]: E0201 07:57:31.568223 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:57:35 crc kubenswrapper[4835]: I0201 07:57:35.567122 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:57:35 crc kubenswrapper[4835]: I0201 
07:57:35.567717 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:57:35 crc kubenswrapper[4835]: E0201 07:57:35.568094 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:57:36 crc kubenswrapper[4835]: I0201 07:57:36.567754 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:57:36 crc kubenswrapper[4835]: I0201 07:57:36.567861 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:57:36 crc kubenswrapper[4835]: I0201 07:57:36.567961 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:57:36 crc kubenswrapper[4835]: I0201 07:57:36.567970 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:57:36 crc kubenswrapper[4835]: I0201 07:57:36.568012 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:57:36 crc kubenswrapper[4835]: E0201 07:57:36.568548 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.396564 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:41 crc kubenswrapper[4835]: E0201 07:57:41.397308 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="extract-utilities" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.397320 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" 
containerName="extract-utilities" Feb 01 07:57:41 crc kubenswrapper[4835]: E0201 07:57:41.397343 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="extract-content" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.397349 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="extract-content" Feb 01 07:57:41 crc kubenswrapper[4835]: E0201 07:57:41.397363 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="registry-server" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.397370 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="registry-server" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.397529 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="registry-server" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.398586 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.417977 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.438232 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.438492 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dfbh\" (UniqueName: \"kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.438559 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.540206 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.540379 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.540527 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-2dfbh\" (UniqueName: \"kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.541219 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.541248 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.571366 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dfbh\" (UniqueName: \"kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.726519 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:42 crc kubenswrapper[4835]: I0201 07:57:42.308273 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:42 crc kubenswrapper[4835]: W0201 07:57:42.325581 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1abf7dc2_505b_4eb6_836c_fd043219944a.slice/crio-c4e4a369e497853783e3c5fa4192de067cb04dd84f4519b87dc3e490de38fa16 WatchSource:0}: Error finding container c4e4a369e497853783e3c5fa4192de067cb04dd84f4519b87dc3e490de38fa16: Status 404 returned error can't find the container with id c4e4a369e497853783e3c5fa4192de067cb04dd84f4519b87dc3e490de38fa16 Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.189765 4835 generic.go:334] "Generic (PLEG): container finished" podID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerID="856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139" exitCode=0 Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.189856 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerDied","Data":"856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139"} Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.191620 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerStarted","Data":"c4e4a369e497853783e3c5fa4192de067cb04dd84f4519b87dc3e490de38fa16"} Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.192370 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.566692 4835 scope.go:117] "RemoveContainer" 
containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.566716 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:57:43 crc kubenswrapper[4835]: E0201 07:57:43.566913 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:57:45 crc kubenswrapper[4835]: I0201 07:57:45.211865 4835 generic.go:334] "Generic (PLEG): container finished" podID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerID="be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23" exitCode=0 Feb 01 07:57:45 crc kubenswrapper[4835]: I0201 07:57:45.213053 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerDied","Data":"be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23"} Feb 01 07:57:46 crc kubenswrapper[4835]: I0201 07:57:46.226191 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerStarted","Data":"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732"} Feb 01 07:57:49 crc kubenswrapper[4835]: I0201 07:57:49.566752 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:57:49 crc kubenswrapper[4835]: I0201 07:57:49.567382 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:57:49 crc kubenswrapper[4835]: E0201 07:57:49.567844 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.568344 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.568520 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.568676 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.568689 4835 scope.go:117] "RemoveContainer" 
containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.568752 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:57:51 crc kubenswrapper[4835]: E0201 07:57:51.569331 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.727518 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.727803 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.789473 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.805288 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-58fdf" podStartSLOduration=8.235505638 podStartE2EDuration="10.805270554s" podCreationTimestamp="2026-02-01 07:57:41 +0000 UTC" firstStartedPulling="2026-02-01 07:57:43.191801066 +0000 UTC m=+2136.312237540" lastFinishedPulling="2026-02-01 07:57:45.761565982 +0000 UTC m=+2138.882002456" observedRunningTime="2026-02-01 07:57:46.255977679 +0000 UTC m=+2139.376414163" watchObservedRunningTime="2026-02-01 07:57:51.805270554 +0000 UTC m=+2144.925706988" Feb 01 07:57:52 crc kubenswrapper[4835]: I0201 07:57:52.327027 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.191853 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.191941 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.393625 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.394233 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-58fdf" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="registry-server" containerID="cri-o://80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732" gracePeriod=2 Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.778182 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.885958 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities\") pod \"1abf7dc2-505b-4eb6-836c-fd043219944a\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.886079 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dfbh\" (UniqueName: \"kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh\") pod \"1abf7dc2-505b-4eb6-836c-fd043219944a\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.886175 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content\") pod \"1abf7dc2-505b-4eb6-836c-fd043219944a\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.888186 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities" (OuterVolumeSpecName: "utilities") pod "1abf7dc2-505b-4eb6-836c-fd043219944a" (UID: "1abf7dc2-505b-4eb6-836c-fd043219944a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.892559 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh" (OuterVolumeSpecName: "kube-api-access-2dfbh") pod "1abf7dc2-505b-4eb6-836c-fd043219944a" (UID: "1abf7dc2-505b-4eb6-836c-fd043219944a"). InnerVolumeSpecName "kube-api-access-2dfbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.910719 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1abf7dc2-505b-4eb6-836c-fd043219944a" (UID: "1abf7dc2-505b-4eb6-836c-fd043219944a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.988099 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.988374 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.988505 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dfbh\" (UniqueName: \"kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh\") on node \"crc\" DevicePath \"\"" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.316929 4835 generic.go:334] "Generic (PLEG): container finished" podID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerID="80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732" exitCode=0 Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.316981 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerDied","Data":"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732"} Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.317012 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerDied","Data":"c4e4a369e497853783e3c5fa4192de067cb04dd84f4519b87dc3e490de38fa16"} Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.317034 4835 scope.go:117] "RemoveContainer" containerID="80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.317108 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.342392 4835 scope.go:117] "RemoveContainer" containerID="be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.366550 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.379557 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.394792 4835 scope.go:117] "RemoveContainer" containerID="856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.433187 4835 scope.go:117] "RemoveContainer" containerID="80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732" Feb 01 07:57:56 crc kubenswrapper[4835]: E0201 07:57:56.433813 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732\": container with ID starting with 80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732 not found: ID does not exist" containerID="80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.433853 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732"} err="failed to get container status \"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732\": rpc error: code = NotFound desc = could not find container \"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732\": container with ID starting with 80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732 not found: ID does not exist" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.433884 4835 scope.go:117] "RemoveContainer" containerID="be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23" Feb 01 07:57:56 crc kubenswrapper[4835]: E0201 07:57:56.434261 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23\": container with ID starting with be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23 not found: ID does not exist" containerID="be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.434297 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23"} err="failed to get container status \"be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23\": rpc error: code = NotFound desc = could not find container \"be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23\": container with ID starting with be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23 not found: ID does not exist" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.434311 4835 scope.go:117] "RemoveContainer" containerID="856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139" Feb 01 07:57:56 crc kubenswrapper[4835]: E0201 07:57:56.434634 4835 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139\": container with ID starting with 856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139 not found: ID does not exist" containerID="856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.434658 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139"} err="failed to get container status \"856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139\": rpc error: code = NotFound desc = could not find container \"856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139\": container with ID starting with 856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139 not found: ID does not exist" Feb 01 07:57:57 crc kubenswrapper[4835]: I0201 07:57:57.578780 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" path="/var/lib/kubelet/pods/1abf7dc2-505b-4eb6-836c-fd043219944a/volumes" Feb 01 07:57:58 crc kubenswrapper[4835]: I0201 07:57:58.566817 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:57:58 crc kubenswrapper[4835]: I0201 07:57:58.567082 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:57:58 crc kubenswrapper[4835]: E0201 07:57:58.567382 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:58:04 crc kubenswrapper[4835]: I0201 07:58:04.566826 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:58:04 crc kubenswrapper[4835]: I0201 07:58:04.567211 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:58:04 crc kubenswrapper[4835]: E0201 07:58:04.567672 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:58:05 crc kubenswrapper[4835]: I0201 07:58:05.566851 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:58:05 crc kubenswrapper[4835]: I0201 07:58:05.567284 4835 scope.go:117] "RemoveContainer" 
containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:58:05 crc kubenswrapper[4835]: I0201 07:58:05.567889 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:58:05 crc kubenswrapper[4835]: I0201 07:58:05.567963 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:58:05 crc kubenswrapper[4835]: I0201 07:58:05.568126 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:58:05 crc kubenswrapper[4835]: E0201 07:58:05.720868 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:58:06 crc kubenswrapper[4835]: I0201 07:58:06.437467 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a"} Feb 01 07:58:06 crc kubenswrapper[4835]: I0201 07:58:06.438366 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:58:06 crc kubenswrapper[4835]: I0201 07:58:06.438513 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:58:06 crc kubenswrapper[4835]: I0201 07:58:06.438694 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:58:06 crc kubenswrapper[4835]: I0201 07:58:06.438767 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:58:06 crc kubenswrapper[4835]: E0201 07:58:06.439221 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for 
\"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.598659 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:10 crc kubenswrapper[4835]: E0201 07:58:10.599930 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="extract-utilities" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.599964 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="extract-utilities" Feb 01 07:58:10 crc kubenswrapper[4835]: E0201 07:58:10.600017 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="extract-content" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.600035 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="extract-content" Feb 01 07:58:10 crc kubenswrapper[4835]: E0201 07:58:10.600060 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="registry-server" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.600072 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="registry-server" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.600490 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="registry-server" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.602810 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.614067 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.738896 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.739039 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.739076 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lscrc\" (UniqueName: \"kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.839887 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.839981 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.840008 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lscrc\" (UniqueName: \"kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.840580 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.840722 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.865348 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lscrc\" (UniqueName: \"kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.932294 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:11 crc kubenswrapper[4835]: I0201 07:58:11.423453 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:11 crc kubenswrapper[4835]: W0201 07:58:11.433536 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd226fc0e_17db_48d6_8c00_dc71f542186d.slice/crio-0d75697c6ed7b1b3718093a971509729e46e875d4e4dbcf17374b7e01fcdc536 WatchSource:0}: Error finding container 0d75697c6ed7b1b3718093a971509729e46e875d4e4dbcf17374b7e01fcdc536: Status 404 returned error can't find the container with id 0d75697c6ed7b1b3718093a971509729e46e875d4e4dbcf17374b7e01fcdc536 Feb 01 07:58:11 crc kubenswrapper[4835]: I0201 07:58:11.482480 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerStarted","Data":"0d75697c6ed7b1b3718093a971509729e46e875d4e4dbcf17374b7e01fcdc536"} Feb 01 07:58:12 crc kubenswrapper[4835]: I0201 07:58:12.496234 4835 generic.go:334] "Generic (PLEG): container finished" podID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerID="7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b" exitCode=0 Feb 01 07:58:12 crc kubenswrapper[4835]: I0201 07:58:12.496347 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerDied","Data":"7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b"} Feb 01 07:58:13 crc kubenswrapper[4835]: I0201 07:58:13.506216 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerStarted","Data":"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7"} Feb 01 07:58:13 crc kubenswrapper[4835]: I0201 07:58:13.566667 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:58:13 crc kubenswrapper[4835]: I0201 07:58:13.566701 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:58:13 crc kubenswrapper[4835]: E0201 07:58:13.566963 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:58:14 crc kubenswrapper[4835]: I0201 07:58:14.525551 4835 generic.go:334] "Generic 
(PLEG): container finished" podID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerID="b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7" exitCode=0 Feb 01 07:58:14 crc kubenswrapper[4835]: I0201 07:58:14.525715 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerDied","Data":"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7"} Feb 01 07:58:15 crc kubenswrapper[4835]: I0201 07:58:15.537170 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerStarted","Data":"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173"} Feb 01 07:58:15 crc kubenswrapper[4835]: I0201 07:58:15.558883 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6hkcc" podStartSLOduration=2.995972553 podStartE2EDuration="5.558865492s" podCreationTimestamp="2026-02-01 07:58:10 +0000 UTC" firstStartedPulling="2026-02-01 07:58:12.499833401 +0000 UTC m=+2165.620269865" lastFinishedPulling="2026-02-01 07:58:15.06272633 +0000 UTC m=+2168.183162804" observedRunningTime="2026-02-01 07:58:15.556027968 +0000 UTC m=+2168.676464412" watchObservedRunningTime="2026-02-01 07:58:15.558865492 +0000 UTC m=+2168.679301936" Feb 01 07:58:18 crc kubenswrapper[4835]: I0201 07:58:18.567273 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:58:18 crc kubenswrapper[4835]: I0201 07:58:18.567970 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:58:18 crc kubenswrapper[4835]: E0201 07:58:18.568461 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:58:20 crc kubenswrapper[4835]: I0201 07:58:20.932698 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:20 crc kubenswrapper[4835]: I0201 07:58:20.932992 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:20 crc kubenswrapper[4835]: I0201 07:58:20.985850 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.567920 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.568068 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.568264 4835 scope.go:117] "RemoveContainer" 
containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.568336 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:58:21 crc kubenswrapper[4835]: E0201 07:58:21.568951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.643717 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.695969 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:23 crc kubenswrapper[4835]: I0201 07:58:23.611517 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6hkcc" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="registry-server" containerID="cri-o://ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173" gracePeriod=2 Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.061433 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.175996 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lscrc\" (UniqueName: \"kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc\") pod \"d226fc0e-17db-48d6-8c00-dc71f542186d\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.176074 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities\") pod \"d226fc0e-17db-48d6-8c00-dc71f542186d\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.176117 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content\") pod \"d226fc0e-17db-48d6-8c00-dc71f542186d\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.177563 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities" (OuterVolumeSpecName: "utilities") pod "d226fc0e-17db-48d6-8c00-dc71f542186d" (UID: "d226fc0e-17db-48d6-8c00-dc71f542186d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.188474 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc" (OuterVolumeSpecName: "kube-api-access-lscrc") pod "d226fc0e-17db-48d6-8c00-dc71f542186d" (UID: "d226fc0e-17db-48d6-8c00-dc71f542186d"). InnerVolumeSpecName "kube-api-access-lscrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.237273 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d226fc0e-17db-48d6-8c00-dc71f542186d" (UID: "d226fc0e-17db-48d6-8c00-dc71f542186d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.278498 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.278538 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lscrc\" (UniqueName: \"kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc\") on node \"crc\" DevicePath \"\"" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.278572 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.622606 4835 generic.go:334] "Generic (PLEG): container finished" podID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerID="ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173" exitCode=0 Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.622671 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerDied","Data":"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173"} Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.622711 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerDied","Data":"0d75697c6ed7b1b3718093a971509729e46e875d4e4dbcf17374b7e01fcdc536"} Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.622739 4835 scope.go:117] "RemoveContainer" containerID="ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.624386 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.661828 4835 scope.go:117] "RemoveContainer" containerID="b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.688508 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.695137 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.709980 4835 scope.go:117] "RemoveContainer" containerID="7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.747134 4835 scope.go:117] "RemoveContainer" containerID="ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173" Feb 01 07:58:24 crc kubenswrapper[4835]: E0201 07:58:24.747897 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173\": container with ID starting with ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173 not found: ID does not exist" containerID="ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.747999 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173"} err="failed to get container status \"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173\": rpc error: code = NotFound desc = could not find container \"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173\": container with ID starting with ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173 not found: ID does not exist" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.748282 4835 scope.go:117] "RemoveContainer" containerID="b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7" Feb 01 07:58:24 crc kubenswrapper[4835]: E0201 07:58:24.748916 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7\": container with ID starting with b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7 not found: ID does not exist" containerID="b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.748971 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7"} err="failed to get container status \"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7\": rpc error: code = NotFound desc = could not find container \"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7\": container with ID starting with b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7 not found: ID does not exist" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.749010 4835 scope.go:117] "RemoveContainer" containerID="7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b" Feb 01 07:58:24 crc kubenswrapper[4835]: E0201 07:58:24.749468 4835 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b\": container with ID starting with 7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b not found: ID does not exist" containerID="7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.749512 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b"} err="failed to get container status \"7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b\": rpc error: code = NotFound desc = could not find container \"7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b\": container with ID starting with 7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b not found: ID does not exist" Feb 01 07:58:25 crc kubenswrapper[4835]: I0201 07:58:25.191790 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:58:25 crc kubenswrapper[4835]: I0201 07:58:25.192188 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:58:25 crc kubenswrapper[4835]: I0201 07:58:25.583662 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" path="/var/lib/kubelet/pods/d226fc0e-17db-48d6-8c00-dc71f542186d/volumes" Feb 01 07:58:28 crc kubenswrapper[4835]: I0201 07:58:28.566981 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:58:28 crc kubenswrapper[4835]: I0201 07:58:28.567276 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:58:28 crc kubenswrapper[4835]: E0201 07:58:28.567574 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:58:29 crc kubenswrapper[4835]: I0201 07:58:29.567264 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:58:29 crc kubenswrapper[4835]: I0201 07:58:29.567314 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:58:29 crc kubenswrapper[4835]: E0201 07:58:29.567866 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.639260 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Feb 01 07:58:31 crc kubenswrapper[4835]: E0201 07:58:31.639995 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="registry-server" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.640008 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="registry-server" Feb 01 07:58:31 crc kubenswrapper[4835]: E0201 07:58:31.640032 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="extract-content" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.640038 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="extract-content" Feb 01 07:58:31 crc kubenswrapper[4835]: E0201 07:58:31.640053 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="extract-utilities" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.640060 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="extract-utilities" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.640197 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="registry-server" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.644289 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.652814 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.657752 4835 util.go:30] "No sandbox for pod can be found. 
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.662497 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.669930 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.708466 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.708518 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-lock\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.708620 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s6fv\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-kube-api-access-2s6fv\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.708723 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-etc-swift\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.708779 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-cache\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810418 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-lock\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810589 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810676 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-lock\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810754 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6slpq\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-kube-api-access-6slpq\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810842 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-cache\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810911 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-etc-swift\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810978 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s6fv\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-kube-api-access-2s6fv\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811048 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811093 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") device mount path \"/mnt/openstack/pv02\"" pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811425 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-etc-swift\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811501 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-cache\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811567 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-lock\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811887 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-cache\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.818883 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-etc-swift\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.836146 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s6fv\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-kube-api-access-2s6fv\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.841920 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913131 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6slpq\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-kube-api-access-6slpq\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913226 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-cache\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913259 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-etc-swift\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913305 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913663 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") device mount path \"/mnt/openstack/pv11\"" pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913855 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-lock\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913914 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-cache\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.914261 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-lock\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.919325 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-etc-swift\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.929058 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6slpq\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-kube-api-access-6slpq\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.936272 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.974856 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.994563 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1"
Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.432829 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Feb 01 07:58:32 crc kubenswrapper[4835]: W0201 07:58:32.436300 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69f0354b_0c3b_4bc5_8aeb_0ac1b59ff0ef.slice/crio-99ff17610b1e1b9be9681d8ef2fc9cdc8c877d7b83e44644c4d649538db9d9e3 WatchSource:0}: Error finding container 99ff17610b1e1b9be9681d8ef2fc9cdc8c877d7b83e44644c4d649538db9d9e3: Status 404 returned error can't find the container with id 99ff17610b1e1b9be9681d8ef2fc9cdc8c877d7b83e44644c4d649538db9d9e3
Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.496786 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Feb 01 07:58:32 crc kubenswrapper[4835]: W0201 07:58:32.504454 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod559d52a7_a172_4c3c_aa13_ba07036485e1.slice/crio-c50f73e90465a9f88dcf982411fc13d4f79db51edc0156b9af41c0cdc105aa6d WatchSource:0}: Error finding container c50f73e90465a9f88dcf982411fc13d4f79db51edc0156b9af41c0cdc105aa6d: Status 404 returned error can't find the container with id c50f73e90465a9f88dcf982411fc13d4f79db51edc0156b9af41c0cdc105aa6d
Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.695311 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"f85c49949ac82b12041efeed7d52e54767d284cb9e6eafea6814ad49ca6946f1"}
Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.695691 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"99ff17610b1e1b9be9681d8ef2fc9cdc8c877d7b83e44644c4d649538db9d9e3"}
Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.696992 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"7a282d6168e4fc22af855060c7caa16d3d89996ec7fca709802d564c7d5cb413"}
Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.697011 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"c50f73e90465a9f88dcf982411fc13d4f79db51edc0156b9af41c0cdc105aa6d"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.567584 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89"
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.567966 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794"
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.568076 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150"
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.568121 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536"
Feb 01 07:58:33 crc kubenswrapper[4835]: E0201 07:58:33.568511 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.765227 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="f24e7d5b54eea247b82f3883cc21f16e7aa4caa6af0dc8bcd36658dc6d2f42ef" exitCode=1
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.765321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"7b0c5697053a758241dec5fcbbdb0fbd6ae70937550858c99a917a5f0400fb2b"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.766095 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"880d86ec1afbb3c274c816bb68775340a96b7442c4264f02a07362912972f0ed"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.766113 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"da897bf1ebee01d787510151130bec56d89c6ce450cd58a55459700162acb7fa"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.766121 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"120bf873bececff664e4891e3314412e08fc9b1b04b2e1e12619c10e5426be9f"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.766129 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"9159bb986e366f56235b4cb7c77f57f48ee8b200c8fdf1bf6d336ca6aea3ab82"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.766137 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"f24e7d5b54eea247b82f3883cc21f16e7aa4caa6af0dc8bcd36658dc6d2f42ef"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774444 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="a91c7a061710f366c74c3a85530795267a7148635dce19cc596c818cd545af65" exitCode=1
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774510 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"c8528709d62b403036c46d245ab545d72f4c72dae556c7cd913a6c522309b8ad"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774543 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"14371dcc51f78107e65f9620009454edccc5ceff157a028a257c1b7e1dca7708"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774572 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"a46c150ba0c28eb42d4905368d97aab983837ab66f9e5a8ae77b8c4533dcad42"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774584 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"b5152f9e83944918ad324cd62f5a8c4e86da92d9f0ed14b5ed68ca341697958e"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774597 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"88f1b6e5372263b6fc301fab8360c8f51cba9897427d8e0bd5f56491d1eda3f1"}
Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774608 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"a91c7a061710f366c74c3a85530795267a7148635dce19cc596c818cd545af65"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789049 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="880d86ec1afbb3c274c816bb68775340a96b7442c4264f02a07362912972f0ed" exitCode=1
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789106 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"811dcfbbfbce2457a26cf2cfd3d7f241f223d0bd48897b5e6e54984050426b01"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789444 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"880d86ec1afbb3c274c816bb68775340a96b7442c4264f02a07362912972f0ed"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"3df11a0d072268a6c13e002e29aaa9b6f3829b109ad04c2d2218966599a07de2"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789471 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"e402fc4a3964869718aa6b942855005121c18ce735e11a2370dece42f35ad879"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789479 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"c79ff7541114600de37a172509eea1cb11eec93c315c86aafccf0b9d756e98ea"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789487 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"d5f6ed83daa849a5b58623246bbc78ae3ac07884192fec0a775d0522275c259c"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789494 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"6a0b9399ae0be08a113e4bd1d4305c82b3bcdc7a1a821377deff101aa007dfa8"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789502 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"ca54beac538f6ae6973cb4eb9b4a67af143d74a149de2e45b76be91d795370e6"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795431 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="14371dcc51f78107e65f9620009454edccc5ceff157a028a257c1b7e1dca7708" exitCode=1
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795468 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795490 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"14371dcc51f78107e65f9620009454edccc5ceff157a028a257c1b7e1dca7708"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795501 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"65dc7266ee11a13fe2c4621a65985c46614f4e16c244282d23b7962db16a47f0"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795509 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"8d711a9565b402e23f2bfa8b8607c93c9b1d461ca1a010915a3e04cede45ad37"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795517 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"700112fad0f4ad91d48c44e77419088f8f3cdd322d0db821e4eac71b3672a4b2"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795525 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"06e6ea20a6e882ef1dd4eaf6f1eff22d0cdb09cb9ba2cd2ac2f288439e8b0497"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795533 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"27c849c644ce0516291fe192a32dc84da3fc8c003447e0320aa9dce182c1c117"}
Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795541 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"1ef81edf87cfd7dc6d9ec352e17d00c7943bb91f54ab196b0175af87c479b6f2"}
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.820452 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="e402fc4a3964869718aa6b942855005121c18ce735e11a2370dece42f35ad879" exitCode=1
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.820552 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"e402fc4a3964869718aa6b942855005121c18ce735e11a2370dece42f35ad879"}
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.820594 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"c17fd37ba00889658805c2c386c14292b040b366890151f28e353d1695d2920d"}
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.821380 4835 scope.go:117] "RemoveContainer" containerID="f24e7d5b54eea247b82f3883cc21f16e7aa4caa6af0dc8bcd36658dc6d2f42ef"
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.821503 4835 scope.go:117] "RemoveContainer" containerID="880d86ec1afbb3c274c816bb68775340a96b7442c4264f02a07362912972f0ed"
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.821620 4835 scope.go:117] "RemoveContainer" containerID="e402fc4a3964869718aa6b942855005121c18ce735e11a2370dece42f35ad879"
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.831026 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="8d711a9565b402e23f2bfa8b8607c93c9b1d461ca1a010915a3e04cede45ad37" exitCode=1
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.831076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"8d711a9565b402e23f2bfa8b8607c93c9b1d461ca1a010915a3e04cede45ad37"}
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.831117 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"6dfca8ce2d35261ebf5b46bba676bf2fb2d120fc3e4f9aa076139877c2d73727"}
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.835338 4835 scope.go:117] "RemoveContainer" containerID="a91c7a061710f366c74c3a85530795267a7148635dce19cc596c818cd545af65"
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.835942 4835 scope.go:117] "RemoveContainer" containerID="14371dcc51f78107e65f9620009454edccc5ceff157a028a257c1b7e1dca7708"
Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.836162 4835 scope.go:117] "RemoveContainer" containerID="8d711a9565b402e23f2bfa8b8607c93c9b1d461ca1a010915a3e04cede45ad37"
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861015 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf" exitCode=1
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861380 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb" exitCode=1
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861083 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6"}
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861575 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf"}
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861617 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb"}
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861649 4835 scope.go:117] "RemoveContainer" containerID="14371dcc51f78107e65f9620009454edccc5ceff157a028a257c1b7e1dca7708"
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.864439 4835 scope.go:117] "RemoveContainer" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb"
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.864650 4835 scope.go:117] "RemoveContainer" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf"
Feb 01 07:58:36 crc kubenswrapper[4835]: E0201 07:58:36.865504 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873798 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9" exitCode=1
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873833 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc" exitCode=1
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873844 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee" exitCode=1
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873868 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9"}
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873898 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc"}
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873912 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee"}
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.874978 4835 scope.go:117] "RemoveContainer" containerID="24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee"
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.875131 4835 scope.go:117] "RemoveContainer" containerID="02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc"
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.875562 4835 scope.go:117] "RemoveContainer" containerID="28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9"
Feb 01 07:58:36 crc kubenswrapper[4835]: E0201 07:58:36.876099 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.940379 4835 scope.go:117] "RemoveContainer" containerID="a91c7a061710f366c74c3a85530795267a7148635dce19cc596c818cd545af65"
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.004722 4835 scope.go:117] "RemoveContainer" containerID="e402fc4a3964869718aa6b942855005121c18ce735e11a2370dece42f35ad879"
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.061731 4835 scope.go:117] "RemoveContainer" containerID="880d86ec1afbb3c274c816bb68775340a96b7442c4264f02a07362912972f0ed"
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.118544 4835 scope.go:117] "RemoveContainer" containerID="f24e7d5b54eea247b82f3883cc21f16e7aa4caa6af0dc8bcd36658dc6d2f42ef"
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.891004 4835 scope.go:117] "RemoveContainer" containerID="24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee"
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.891528 4835 scope.go:117] "RemoveContainer" containerID="02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc"
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.891750 4835 scope.go:117] "RemoveContainer" containerID="28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9"
Feb 01 07:58:37 crc kubenswrapper[4835]: E0201 07:58:37.892237 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.900392 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6" exitCode=1
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.900489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6"}
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.900537 4835 scope.go:117] "RemoveContainer" containerID="8d711a9565b402e23f2bfa8b8607c93c9b1d461ca1a010915a3e04cede45ad37"
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.901401 4835 scope.go:117] "RemoveContainer" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb"
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.901955 4835 scope.go:117] "RemoveContainer" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf"
Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.902195 4835 scope.go:117] "RemoveContainer" containerID="9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6"
Feb 01 07:58:37 crc kubenswrapper[4835]: E0201 07:58:37.902730 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 07:58:38 crc kubenswrapper[4835]: I0201 07:58:38.920959 4835 scope.go:117] "RemoveContainer" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb"
Feb 01 07:58:38 crc kubenswrapper[4835]: I0201 07:58:38.921702 4835 scope.go:117] "RemoveContainer" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf"
Feb 01 07:58:38 crc kubenswrapper[4835]: I0201 07:58:38.921920 4835 scope.go:117] "RemoveContainer" containerID="9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6"
Feb 01 07:58:38 crc kubenswrapper[4835]: E0201 07:58:38.922538 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 07:58:40 crc kubenswrapper[4835]: I0201 07:58:40.566472 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c"
Feb 01 07:58:40 crc kubenswrapper[4835]: I0201 07:58:40.567078 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f"
Feb 01 07:58:40 crc kubenswrapper[4835]: I0201 07:58:40.567205 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553"
Feb 01 07:58:40 crc kubenswrapper[4835]: I0201 07:58:40.567256 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11"
Feb 01 07:58:40 crc kubenswrapper[4835]: E0201 07:58:40.567604 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:58:40 crc kubenswrapper[4835]: E0201 07:58:40.567709 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.000349 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" exitCode=1
Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.000444 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9"}
Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.001177 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758"
Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.002846 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89"
Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.002967 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794"
Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.003011 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9"
Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.003192 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150"
Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.003271 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536"
Feb 01 07:58:46 crc kubenswrapper[4835]: E0201 07:58:46.003817 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.040206 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99" exitCode=1
containerID="b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99" exitCode=1 Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.040835 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99"} Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.041871 4835 scope.go:117] "RemoveContainer" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb" Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.042009 4835 scope.go:117] "RemoveContainer" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf" Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.042057 4835 scope.go:117] "RemoveContainer" containerID="b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99" Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.042225 4835 scope.go:117] "RemoveContainer" containerID="9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6" Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.072488 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" exitCode=1 Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073023 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac"} Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592"} Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073109 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df"} Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073128 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a"} Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073152 4835 scope.go:117] "RemoveContainer" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf" Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073048 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" exitCode=1 Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.074228 4835 scope.go:117] "RemoveContainer" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.074316 4835 scope.go:117] "RemoveContainer" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" Feb 01 07:58:49 crc kubenswrapper[4835]: E0201 07:58:49.074722 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.152726 4835 scope.go:117] "RemoveContainer" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb" Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.100448 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac" exitCode=1 Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.100517 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac"} Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.100618 4835 scope.go:117] "RemoveContainer" containerID="9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6" Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.101706 4835 scope.go:117] "RemoveContainer" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.101840 4835 scope.go:117] "RemoveContainer" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.102045 4835 scope.go:117] "RemoveContainer" containerID="ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac" Feb 01 07:58:50 crc kubenswrapper[4835]: E0201 07:58:50.102639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:58:51 crc kubenswrapper[4835]: I0201 07:58:51.566961 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:58:51 crc kubenswrapper[4835]: I0201 07:58:51.567266 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:58:51 crc kubenswrapper[4835]: E0201 07:58:51.567519 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to 
\"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:58:52 crc kubenswrapper[4835]: I0201 07:58:52.568029 4835 scope.go:117] "RemoveContainer" containerID="24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee" Feb 01 07:58:52 crc kubenswrapper[4835]: I0201 07:58:52.568156 4835 scope.go:117] "RemoveContainer" containerID="02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc" Feb 01 07:58:52 crc kubenswrapper[4835]: I0201 07:58:52.568359 4835 scope.go:117] "RemoveContainer" containerID="28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9" Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.149294 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" exitCode=1 Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.149522 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2"} Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.149616 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e"} Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.149629 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999"} Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.149652 4835 scope.go:117] "RemoveContainer" containerID="24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee" Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.196834 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:58:53 crc kubenswrapper[4835]: E0201 07:58:53.196952 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:58:53 crc kubenswrapper[4835]: E0201 07:58:53.197244 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:00:55.197227764 +0000 UTC m=+2328.317664188 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.169714 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" exitCode=1 Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.169887 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" exitCode=1 Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.169916 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2"} Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.169954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e"} Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.169984 4835 scope.go:117] "RemoveContainer" containerID="28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9" Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.170956 4835 scope.go:117] "RemoveContainer" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.171092 4835 scope.go:117] "RemoveContainer" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.171307 4835 scope.go:117] "RemoveContainer" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" Feb 01 07:58:54 crc kubenswrapper[4835]: E0201 07:58:54.172188 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.255531 4835 scope.go:117] "RemoveContainer" containerID="02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.187405 4835 scope.go:117] "RemoveContainer" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.187593 4835 scope.go:117] "RemoveContainer" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" Feb 01 07:58:55 crc 
kubenswrapper[4835]: I0201 07:58:55.187817 4835 scope.go:117] "RemoveContainer" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" Feb 01 07:58:55 crc kubenswrapper[4835]: E0201 07:58:55.188280 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.191382 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.191469 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.191525 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.192177 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.192278 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" gracePeriod=600 Feb 01 07:58:55 crc kubenswrapper[4835]: E0201 07:58:55.320526 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.566970 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.567001 4835 
scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:58:55 crc kubenswrapper[4835]: E0201 07:58:55.567282 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:58:56 crc kubenswrapper[4835]: I0201 07:58:56.214338 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" exitCode=0 Feb 01 07:58:56 crc kubenswrapper[4835]: I0201 07:58:56.214461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"} Feb 01 07:58:56 crc kubenswrapper[4835]: I0201 07:58:56.214949 4835 scope.go:117] "RemoveContainer" containerID="d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7" Feb 01 07:58:56 crc kubenswrapper[4835]: I0201 07:58:56.215889 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 07:58:56 crc kubenswrapper[4835]: E0201 07:58:56.216382 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:58:57 crc kubenswrapper[4835]: I0201 07:58:57.579017 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:58:57 crc kubenswrapper[4835]: I0201 07:58:57.579209 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:58:57 crc kubenswrapper[4835]: I0201 07:58:57.579311 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:58:57 crc kubenswrapper[4835]: I0201 07:58:57.579515 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:58:57 crc kubenswrapper[4835]: I0201 07:58:57.579618 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:58:57 crc kubenswrapper[4835]: E0201 07:58:57.580331 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:00 crc kubenswrapper[4835]: E0201 07:59:00.355327 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:59:01 crc kubenswrapper[4835]: I0201 07:59:01.263276 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:59:04 crc kubenswrapper[4835]: I0201 07:59:04.567319 4835 scope.go:117] "RemoveContainer" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" Feb 01 07:59:04 crc kubenswrapper[4835]: I0201 07:59:04.567830 4835 scope.go:117] "RemoveContainer" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" Feb 01 07:59:04 crc kubenswrapper[4835]: I0201 07:59:04.568034 4835 scope.go:117] "RemoveContainer" containerID="ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac" Feb 01 07:59:04 crc kubenswrapper[4835]: E0201 07:59:04.568544 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:05 crc kubenswrapper[4835]: I0201 07:59:05.566622 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:05 crc kubenswrapper[4835]: I0201 07:59:05.566668 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:59:05 crc kubenswrapper[4835]: E0201 07:59:05.567046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for 
\"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:05 crc kubenswrapper[4835]: I0201 07:59:05.568060 4835 scope.go:117] "RemoveContainer" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" Feb 01 07:59:05 crc kubenswrapper[4835]: I0201 07:59:05.568185 4835 scope.go:117] "RemoveContainer" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" Feb 01 07:59:05 crc kubenswrapper[4835]: I0201 07:59:05.568360 4835 scope.go:117] "RemoveContainer" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" Feb 01 07:59:05 crc kubenswrapper[4835]: E0201 07:59:05.568940 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:07 crc kubenswrapper[4835]: I0201 07:59:07.577220 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:07 crc kubenswrapper[4835]: I0201 07:59:07.577253 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:59:07 crc kubenswrapper[4835]: E0201 07:59:07.577498 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:09 crc kubenswrapper[4835]: I0201 07:59:09.568380 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 07:59:09 crc kubenswrapper[4835]: E0201 07:59:09.568797 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:59:12 crc kubenswrapper[4835]: I0201 07:59:12.568697 4835 scope.go:117] "RemoveContainer" 
containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:59:12 crc kubenswrapper[4835]: I0201 07:59:12.569135 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:59:12 crc kubenswrapper[4835]: I0201 07:59:12.569185 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:12 crc kubenswrapper[4835]: I0201 07:59:12.569315 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:59:12 crc kubenswrapper[4835]: I0201 07:59:12.569393 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:59:12 crc kubenswrapper[4835]: E0201 07:59:12.570204 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:16 crc kubenswrapper[4835]: I0201 07:59:16.567410 4835 scope.go:117] "RemoveContainer" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" Feb 01 07:59:16 crc kubenswrapper[4835]: I0201 07:59:16.567834 4835 scope.go:117] "RemoveContainer" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" Feb 01 07:59:16 crc kubenswrapper[4835]: I0201 07:59:16.567919 4835 scope.go:117] "RemoveContainer" containerID="ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac" Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.438891 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" exitCode=1 Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.439233 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6"} Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.439273 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972"} Feb 01 07:59:17 crc 
kubenswrapper[4835]: I0201 07:59:17.439294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e"} Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.439326 4835 scope.go:117] "RemoveContainer" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.440322 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:17 crc kubenswrapper[4835]: E0201 07:59:17.441571 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.571293 4835 scope.go:117] "RemoveContainer" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.571364 4835 scope.go:117] "RemoveContainer" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.571473 4835 scope.go:117] "RemoveContainer" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.458033 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" exitCode=1 Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.458370 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" exitCode=1 Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.458237 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6"} Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.458449 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972"} Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.458471 4835 scope.go:117] "RemoveContainer" containerID="ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.459044 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.459107 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.459214 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 07:59:18 crc kubenswrapper[4835]: E0201 07:59:18.459568 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.474610 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" exitCode=1 Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.474684 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477"} Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.474725 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0"} Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.474744 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e"} Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.476335 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:18 crc kubenswrapper[4835]: E0201 07:59:18.476890 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.530331 4835 scope.go:117] "RemoveContainer" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.566782 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.566839 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.566970 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.567007 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:59:18 crc kubenswrapper[4835]: E0201 07:59:18.567222 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:18 crc kubenswrapper[4835]: E0201 07:59:18.567398 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.588062 4835 scope.go:117] "RemoveContainer" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.490980 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" exitCode=1 Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491012 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" exitCode=1 Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491078 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477"} Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491153 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0"} Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491178 4835 scope.go:117] "RemoveContainer" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491800 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491867 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491961 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:19 crc kubenswrapper[4835]: E0201 07:59:19.492226 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.539307 4835 scope.go:117] "RemoveContainer" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" Feb 01 07:59:20 crc kubenswrapper[4835]: I0201 07:59:20.522209 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:20 crc kubenswrapper[4835]: I0201 07:59:20.523580 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:20 crc kubenswrapper[4835]: I0201 07:59:20.524014 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:20 crc kubenswrapper[4835]: E0201 07:59:20.524882 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:22 crc kubenswrapper[4835]: I0201 07:59:22.567705 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 07:59:22 crc kubenswrapper[4835]: E0201 07:59:22.568373 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:59:25 crc kubenswrapper[4835]: I0201 07:59:25.567994 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:59:25 crc kubenswrapper[4835]: I0201 07:59:25.568135 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:59:25 crc kubenswrapper[4835]: I0201 07:59:25.568180 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:25 crc kubenswrapper[4835]: I0201 07:59:25.568300 4835 scope.go:117] "RemoveContainer" 
containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:59:25 crc kubenswrapper[4835]: I0201 07:59:25.568365 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:59:25 crc kubenswrapper[4835]: E0201 07:59:25.569020 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.584523 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592" exitCode=1 Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.584596 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592"} Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.585042 4835 scope.go:117] "RemoveContainer" containerID="b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99" Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.586030 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.586160 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.586209 4835 scope.go:117] "RemoveContainer" containerID="6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592" Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.586357 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 07:59:26 crc kubenswrapper[4835]: E0201 07:59:26.587285 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:32 crc kubenswrapper[4835]: I0201 07:59:32.567469 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:32 crc kubenswrapper[4835]: I0201 07:59:32.568519 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:59:32 crc kubenswrapper[4835]: E0201 07:59:32.568860 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:33 crc kubenswrapper[4835]: I0201 07:59:33.567012 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:33 crc kubenswrapper[4835]: I0201 07:59:33.567363 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:59:33 crc kubenswrapper[4835]: E0201 07:59:33.753378 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:34 crc kubenswrapper[4835]: I0201 07:59:34.748643 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" exitCode=1 Feb 01 07:59:34 crc kubenswrapper[4835]: I0201 07:59:34.748710 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390"} Feb 01 07:59:34 crc kubenswrapper[4835]: I0201 07:59:34.748768 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:59:34 crc kubenswrapper[4835]: I0201 07:59:34.750278 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:34 crc kubenswrapper[4835]: I0201 07:59:34.750367 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 07:59:34 crc 
kubenswrapper[4835]: E0201 07:59:34.751459 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:35 crc kubenswrapper[4835]: I0201 07:59:35.567033 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 07:59:35 crc kubenswrapper[4835]: E0201 07:59:35.567502 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:59:35 crc kubenswrapper[4835]: I0201 07:59:35.569155 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:35 crc kubenswrapper[4835]: I0201 07:59:35.569824 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:35 crc kubenswrapper[4835]: I0201 07:59:35.570145 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:35 crc kubenswrapper[4835]: E0201 07:59:35.570794 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:36 crc kubenswrapper[4835]: I0201 07:59:36.535207 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:59:36 crc kubenswrapper[4835]: I0201 07:59:36.536198 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:36 crc kubenswrapper[4835]: I0201 07:59:36.536370 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 07:59:36 crc kubenswrapper[4835]: E0201 07:59:36.537322 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.535279 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.536035 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.536054 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 07:59:37 crc kubenswrapper[4835]: E0201 07:59:37.536490 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.791370 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="c79ff7541114600de37a172509eea1cb11eec93c315c86aafccf0b9d756e98ea" exitCode=1 Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.791442 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"c79ff7541114600de37a172509eea1cb11eec93c315c86aafccf0b9d756e98ea"} Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.792202 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.792356 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.792471 4835 scope.go:117] "RemoveContainer" containerID="c79ff7541114600de37a172509eea1cb11eec93c315c86aafccf0b9d756e98ea" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.792493 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:38 crc kubenswrapper[4835]: E0201 07:59:38.033966 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.568284 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.568954 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.569014 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.569196 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.569290 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:59:38 crc kubenswrapper[4835]: E0201 07:59:38.570078 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.815550 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c"} Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.816598 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.816731 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.816935 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:38 crc kubenswrapper[4835]: E0201 07:59:38.817503 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.567661 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.567780 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.567810 4835 scope.go:117] "RemoveContainer" containerID="6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.567913 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 07:59:40 crc kubenswrapper[4835]: E0201 07:59:40.760902 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.842778 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827"} Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.843611 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.843788 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.843966 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 07:59:40 crc kubenswrapper[4835]: E0201 07:59:40.844579 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:47 crc kubenswrapper[4835]: I0201 07:59:47.573241 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:47 crc kubenswrapper[4835]: I0201 07:59:47.573868 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:59:47 crc kubenswrapper[4835]: E0201 07:59:47.736345 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:47 crc kubenswrapper[4835]: I0201 07:59:47.914489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d"} Feb 01 07:59:47 crc kubenswrapper[4835]: I0201 07:59:47.915049 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:59:47 crc kubenswrapper[4835]: I0201 07:59:47.915399 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:47 crc kubenswrapper[4835]: E0201 07:59:47.915867 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.567925 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.568326 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 07:59:48 crc kubenswrapper[4835]: E0201 07:59:48.568706 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.929197 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" 
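The swift pods above are cycling through CrashLoopBackOff with back-off values of 40s, 1m20s, and 5m0s. Those durations are consistent with a doubling back-off capped at five minutes; a minimal sketch of that policy follows (illustrative only, not kubelet source; the 10s base and the 2x factor are assumptions inferred from the durations logged here, the 5m cap from the repeated "back-off 5m0s" lines):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        d := 10 * time.Second           // assumed initial back-off
        const maxDelay = 5 * time.Minute // cap implied by "back-off 5m0s"
        for i := 0; i < 7; i++ {
            fmt.Println(d) // prints 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
    }

Note that 40s, 1m20s, and 5m0s are exactly the third, fourth, and sixth terms of this sequence, matching pods at different depths of the same crash loop.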
containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" exitCode=1 Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.929279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d"} Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.929357 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.931237 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.931294 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 07:59:48 crc kubenswrapper[4835]: E0201 07:59:48.932211 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:49 crc kubenswrapper[4835]: I0201 07:59:49.019245 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:59:49 crc kubenswrapper[4835]: I0201 07:59:49.941142 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:49 crc kubenswrapper[4835]: I0201 07:59:49.941175 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 07:59:49 crc kubenswrapper[4835]: E0201 07:59:49.941521 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.567610 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 07:59:50 crc kubenswrapper[4835]: E0201 07:59:50.568000 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 
07:59:50.568028 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.568139 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.568182 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.568302 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.568398 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.953504 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"} Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.953980 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.953999 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 07:59:50 crc kubenswrapper[4835]: E0201 07:59:50.954215 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:51 crc kubenswrapper[4835]: E0201 07:59:51.358675 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973114 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" exitCode=1 Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973170 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" exitCode=1 Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973181 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" exitCode=1 Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973180 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"} Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973189 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" exitCode=1 Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973248 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b"} Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973265 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be"} Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973277 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90"} Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973301 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.974102 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.974249 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.974310 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.974476 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.974563 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 07:59:51 crc kubenswrapper[4835]: E0201 07:59:51.975090 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.021023 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.058937 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.098855 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.566946 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.567031 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.567134 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:52 crc kubenswrapper[4835]: E0201 07:59:52.567461 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.567496 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.567577 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.567690 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 07:59:52 crc kubenswrapper[4835]: E0201 07:59:52.568021 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:52 crc kubenswrapper[4835]: 
Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.991612 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"
Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.991724 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90"
Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.991766 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9"
Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.991873 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be"
Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.991932 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b"
Feb 01 07:59:52 crc kubenswrapper[4835]: E0201 07:59:52.992435 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.609006 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"]
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.613042 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.629480 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"]
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.692146 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8grpf\" (UniqueName: \"kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.692589 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.692749 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.793675 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.793756 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.793892 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8grpf\" (UniqueName: \"kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.794561 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.794592 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.816844 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8grpf\" (UniqueName: \"kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.968944 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 07:59:56 crc kubenswrapper[4835]: I0201 07:59:56.241739 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"]
Feb 01 07:59:57 crc kubenswrapper[4835]: I0201 07:59:57.034488 4835 generic.go:334] "Generic (PLEG): container finished" podID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerID="96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290" exitCode=0
Feb 01 07:59:57 crc kubenswrapper[4835]: I0201 07:59:57.034580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerDied","Data":"96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290"}
Feb 01 07:59:57 crc kubenswrapper[4835]: I0201 07:59:57.034841 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerStarted","Data":"34cf027c170ecf8c1066f7cff2eb82430d01141b4a84987b234957bbdf3450df"}
Feb 01 07:59:58 crc kubenswrapper[4835]: I0201 07:59:58.042332 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerStarted","Data":"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0"}
Feb 01 07:59:59 crc kubenswrapper[4835]: I0201 07:59:59.055032 4835 generic.go:334] "Generic (PLEG): container finished" podID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerID="c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0" exitCode=0
Feb 01 07:59:59 crc kubenswrapper[4835]: I0201 07:59:59.055079 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerDied","Data":"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0"}
Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.070096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerStarted","Data":"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8"}
Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.104226 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4jnp6" podStartSLOduration=2.688546499 podStartE2EDuration="5.104199922s" podCreationTimestamp="2026-02-01 07:59:55 +0000 UTC" firstStartedPulling="2026-02-01 07:59:57.036823892 +0000 UTC m=+2270.157260336" lastFinishedPulling="2026-02-01 07:59:59.452477305 +0000 UTC m=+2272.572913759" observedRunningTime="2026-02-01 08:00:00.099247723 +0000 UTC m=+2273.219684247" watchObservedRunningTime="2026-02-01 08:00:00.104199922 +0000 UTC m=+2273.224636396"
pods=["openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t"] Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.168583 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.171837 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.171860 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.181809 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t"] Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.259614 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.259676 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnstf\" (UniqueName: \"kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.259870 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.360882 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.360923 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnstf\" (UniqueName: \"kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.360991 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.362121 
4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.367645 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.386232 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnstf\" (UniqueName: \"kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.502387 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.771337 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t"] Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.084105 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" event={"ID":"0334d6e7-9af5-4634-ab64-18017a9439df","Type":"ContainerStarted","Data":"738d34fc9bdc4a803474eb920a4f2a18316b6617e4c12f62696f18a290c74eb7"} Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.084145 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" event={"ID":"0334d6e7-9af5-4634-ab64-18017a9439df","Type":"ContainerStarted","Data":"4ffea0e07b972060df811e5fc9d81f16f61f3136c1a98aea12ad65dab87c22b9"} Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.109165 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" podStartSLOduration=1.109133597 podStartE2EDuration="1.109133597s" podCreationTimestamp="2026-02-01 08:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 08:00:01.100598284 +0000 UTC m=+2274.221034718" watchObservedRunningTime="2026-02-01 08:00:01.109133597 +0000 UTC m=+2274.229570051" Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.566515 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.567006 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.567033 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:00:01 crc kubenswrapper[4835]: E0201 08:00:01.567189 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
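The volume handling for the two marketplace/OLM pods above follows one reconciliation shape: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded; on pod deletion the reverse runs as UnmountVolume started, UnmountVolume.TearDown succeeded, and finally Volume detached (visible a few entries below). A hypothetical desired-state/actual-state loop illustrating that pattern (names are mine; this shows the general pattern, not kubelet's reconciler_common.go):

    package main

    import "fmt"

    // state pairs the volumes a pod spec wants against what is mounted now.
    type state struct {
        desired map[string]bool // volumes the pod spec wants mounted
        actual  map[string]bool // volumes currently mounted
    }

    // reconcile mounts anything desired-but-absent and unmounts anything
    // mounted-but-no-longer-desired, mirroring the log messages above.
    func (s *state) reconcile() {
        for v := range s.desired {
            if !s.actual[v] {
                fmt.Printf("MountVolume started for volume %q\n", v)
                s.actual[v] = true
                fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
            }
        }
        for v := range s.actual {
            if !s.desired[v] {
                fmt.Printf("UnmountVolume started for volume %q\n", v)
                delete(s.actual, v)
                fmt.Printf("Volume detached for volume %q\n", v)
            }
        }
    }

    func main() {
        s := &state{
            desired: map[string]bool{"config-volume": true, "secret-volume": true},
            actual:  map[string]bool{},
        }
        s.reconcile() // mounts both volumes
        delete(s.desired, "secret-volume")
        s.reconcile() // unmounts the volume the pod no longer wants
    }

Because each pass compares desired against actual rather than replaying events, a repeated sync (like the duplicated "RemoveContainer" entries throughout this log) converges to the same end state.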
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:00:01 crc kubenswrapper[4835]: E0201 08:00:01.567304 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:00:02 crc kubenswrapper[4835]: I0201 08:00:02.095184 4835 generic.go:334] "Generic (PLEG): container finished" podID="0334d6e7-9af5-4634-ab64-18017a9439df" containerID="738d34fc9bdc4a803474eb920a4f2a18316b6617e4c12f62696f18a290c74eb7" exitCode=0 Feb 01 08:00:02 crc kubenswrapper[4835]: I0201 08:00:02.095232 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" event={"ID":"0334d6e7-9af5-4634-ab64-18017a9439df","Type":"ContainerDied","Data":"738d34fc9bdc4a803474eb920a4f2a18316b6617e4c12f62696f18a290c74eb7"} Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.425282 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.509845 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnstf\" (UniqueName: \"kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf\") pod \"0334d6e7-9af5-4634-ab64-18017a9439df\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.509936 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume\") pod \"0334d6e7-9af5-4634-ab64-18017a9439df\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.509976 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume\") pod \"0334d6e7-9af5-4634-ab64-18017a9439df\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.510943 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume" (OuterVolumeSpecName: "config-volume") pod "0334d6e7-9af5-4634-ab64-18017a9439df" (UID: "0334d6e7-9af5-4634-ab64-18017a9439df"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.515139 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf" (OuterVolumeSpecName: "kube-api-access-qnstf") pod "0334d6e7-9af5-4634-ab64-18017a9439df" (UID: "0334d6e7-9af5-4634-ab64-18017a9439df"). InnerVolumeSpecName "kube-api-access-qnstf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.515191 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0334d6e7-9af5-4634-ab64-18017a9439df" (UID: "0334d6e7-9af5-4634-ab64-18017a9439df"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.567059 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.567108 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:03 crc kubenswrapper[4835]: E0201 08:00:03.567564 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.611466 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnstf\" (UniqueName: \"kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.611492 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.611502 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.110038 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" event={"ID":"0334d6e7-9af5-4634-ab64-18017a9439df","Type":"ContainerDied","Data":"4ffea0e07b972060df811e5fc9d81f16f61f3136c1a98aea12ad65dab87c22b9"} Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.110085 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffea0e07b972060df811e5fc9d81f16f61f3136c1a98aea12ad65dab87c22b9" Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.110136 4835 util.go:48] "No ready sandbox for pod can be found. 
Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.110038 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" event={"ID":"0334d6e7-9af5-4634-ab64-18017a9439df","Type":"ContainerDied","Data":"4ffea0e07b972060df811e5fc9d81f16f61f3136c1a98aea12ad65dab87c22b9"}
Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.110085 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffea0e07b972060df811e5fc9d81f16f61f3136c1a98aea12ad65dab87c22b9"
Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.110136 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t"
Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.515871 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"]
Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.523535 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"]
Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.566737 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e"
Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.567131 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972"
Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.567214 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6"
Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.578090 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137b200e-5dcd-43c9-82e2-332071d84cb0" path="/var/lib/kubelet/pods/137b200e-5dcd-43c9-82e2-332071d84cb0/volumes"
Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.969780 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.969834 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 08:00:06 crc kubenswrapper[4835]: I0201 08:00:06.020472 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 08:00:06 crc kubenswrapper[4835]: I0201 08:00:06.142792 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895"}
Feb 01 08:00:06 crc kubenswrapper[4835]: I0201 08:00:06.145635 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6"}
Feb 01 08:00:06 crc kubenswrapper[4835]: I0201 08:00:06.181524 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 08:00:06 crc kubenswrapper[4835]: I0201 08:00:06.250730 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"]
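The probe entries above show how probe types gate one another for certified-operators-4jnp6: readiness is logged with an empty status while the startup probe is still failing, and only flips to "ready" after startup reports "started". A small hypothetical sketch of that gating (names are mine; this is not kubelet's prober, just the observable behavior):

    package main

    import "fmt"

    // podProbes ignores readiness results until the startup probe has
    // passed, mirroring the probe status sequence logged above.
    type podProbes struct {
        startupPassed bool
        ready         bool
    }

    func (p *podProbes) observe(probe string, healthy bool) {
        switch probe {
        case "startup":
            if healthy {
                p.startupPassed = true
            }
        case "readiness":
            if p.startupPassed {
                p.ready = healthy
            } // before startup passes, readiness stays unknown ("")
        }
    }

    func main() {
        p := &podProbes{}
        p.observe("readiness", true) // logged as status=""
        p.observe("startup", false)  // logged as status="unhealthy"
        p.observe("startup", true)   // logged as status="started"
        p.observe("readiness", true) // logged as status="ready"
        fmt.Println(p.ready)         // true
    }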
containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" exitCode=1 Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162443 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9"} Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162506 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895"} Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162527 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6"} Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162542 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.163352 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.163518 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.163718 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:07 crc kubenswrapper[4835]: E0201 08:00:07.164295 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.213649 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.262829 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.574345 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.574546 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.574823 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.185351 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.185456 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.185559 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9"
Feb 01 08:00:08 crc kubenswrapper[4835]: E0201 08:00:08.186012 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.204232 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" exitCode=1
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.204453 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4jnp6" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="registry-server" containerID="cri-o://eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8" gracePeriod=2
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.204712 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da"}
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.205642 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c"}
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.205778 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792"}
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.206466 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.207463 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792"
Feb 01 08:00:08 crc kubenswrapper[4835]: E0201 08:00:08.209088 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.568201 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.568651 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.568700 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.568837 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.568955 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.736640 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jnp6"
Feb 01 08:00:08 crc kubenswrapper[4835]: E0201 08:00:08.799779 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.898483 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content\") pod \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") "
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.898541 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8grpf\" (UniqueName: \"kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf\") pod \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") "
Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.898621 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities\") pod \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") "
volume "kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities" (OuterVolumeSpecName: "utilities") pod "51ab27b9-c1c7-48b0-a4b0-185857b275e3" (UID: "51ab27b9-c1c7-48b0-a4b0-185857b275e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.905734 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf" (OuterVolumeSpecName: "kube-api-access-8grpf") pod "51ab27b9-c1c7-48b0-a4b0-185857b275e3" (UID: "51ab27b9-c1c7-48b0-a4b0-185857b275e3"). InnerVolumeSpecName "kube-api-access-8grpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.952814 4835 scope.go:117] "RemoveContainer" containerID="98c793df94b793188e86124f6ff1a8161f18d725c6666c0e72eb3d6113d10246" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.955881 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51ab27b9-c1c7-48b0-a4b0-185857b275e3" (UID: "51ab27b9-c1c7-48b0-a4b0-185857b275e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.000545 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.000575 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8grpf\" (UniqueName: \"kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.000584 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.217478 4835 generic.go:334] "Generic (PLEG): container finished" podID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerID="eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8" exitCode=0 Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.217555 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerDied","Data":"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8"} Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.217598 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.217973 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerDied","Data":"34cf027c170ecf8c1066f7cff2eb82430d01141b4a84987b234957bbdf3450df"} Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.218032 4835 scope.go:117] "RemoveContainer" containerID="eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.230073 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" exitCode=1 Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.230125 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" exitCode=1 Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.230157 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da"} Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.230222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c"} Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.231488 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.231693 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.231993 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:09 crc kubenswrapper[4835]: E0201 08:00:09.233134 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.244689 4835 scope.go:117] "RemoveContainer" containerID="c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.247816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4"} Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.248926 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.249145 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.249449 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.249608 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:00:09 crc kubenswrapper[4835]: E0201 08:00:09.250126 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.280201 4835 scope.go:117] "RemoveContainer" containerID="96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.314534 4835 scope.go:117] "RemoveContainer" containerID="eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8" Feb 01 08:00:09 crc kubenswrapper[4835]: E0201 08:00:09.316181 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8\": container with ID starting with eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8 not found: ID does not exist" containerID="eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.316247 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8"} err="failed to get container status \"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8\": rpc error: code = NotFound desc = could not find container \"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8\": container with ID starting with eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8 not found: ID does not exist" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.316292 4835 scope.go:117] "RemoveContainer" containerID="c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0" 
Feb 01 08:00:09 crc kubenswrapper[4835]: E0201 08:00:09.316712 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0\": container with ID starting with c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0 not found: ID does not exist" containerID="c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.316822 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0"} err="failed to get container status \"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0\": rpc error: code = NotFound desc = could not find container \"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0\": container with ID starting with c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0 not found: ID does not exist" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.316916 4835 scope.go:117] "RemoveContainer" containerID="96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290" Feb 01 08:00:09 crc kubenswrapper[4835]: E0201 08:00:09.317704 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290\": container with ID starting with 96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290 not found: ID does not exist" containerID="96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.317763 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290"} err="failed to get container status \"96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290\": rpc error: code = NotFound desc = could not find container \"96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290\": container with ID starting with 96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290 not found: ID does not exist" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.317799 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.318515 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"] Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.336153 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"] Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.364935 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.584788 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" path="/var/lib/kubelet/pods/51ab27b9-c1c7-48b0-a4b0-185857b275e3/volumes" Feb 01 08:00:10 crc kubenswrapper[4835]: I0201 08:00:10.266931 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:10 crc kubenswrapper[4835]: I0201 08:00:10.267007 4835 scope.go:117] "RemoveContainer" 
containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:10 crc kubenswrapper[4835]: I0201 08:00:10.267125 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:10 crc kubenswrapper[4835]: E0201 08:00:10.267442 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:13 crc kubenswrapper[4835]: I0201 08:00:13.566839 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:00:13 crc kubenswrapper[4835]: I0201 08:00:13.567443 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:00:13 crc kubenswrapper[4835]: E0201 08:00:13.567923 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:00:14 crc kubenswrapper[4835]: I0201 08:00:14.568664 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:00:14 crc kubenswrapper[4835]: E0201 08:00:14.568993 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.324147 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827" exitCode=1 Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.324210 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827"} Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.324653 4835 scope.go:117] "RemoveContainer" 
containerID="6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592" Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.325250 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.325307 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.325328 4835 scope.go:117] "RemoveContainer" containerID="6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827" Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.325390 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:15 crc kubenswrapper[4835]: E0201 08:00:15.325672 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:16 crc kubenswrapper[4835]: I0201 08:00:16.566808 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:00:16 crc kubenswrapper[4835]: I0201 08:00:16.566851 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:16 crc kubenswrapper[4835]: E0201 08:00:16.567209 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:21 crc kubenswrapper[4835]: I0201 08:00:21.568003 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:21 crc kubenswrapper[4835]: I0201 08:00:21.568659 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:21 crc kubenswrapper[4835]: I0201 08:00:21.568923 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:21 crc kubenswrapper[4835]: E0201 08:00:21.569645 4835 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:22 crc kubenswrapper[4835]: I0201 08:00:22.566799 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:00:22 crc kubenswrapper[4835]: I0201 08:00:22.567181 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:00:22 crc kubenswrapper[4835]: I0201 08:00:22.567279 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:00:22 crc kubenswrapper[4835]: I0201 08:00:22.567322 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:00:22 crc kubenswrapper[4835]: E0201 08:00:22.567740 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:00:25 crc kubenswrapper[4835]: I0201 08:00:25.567446 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:00:25 crc kubenswrapper[4835]: E0201 08:00:25.567819 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:00:28 crc kubenswrapper[4835]: I0201 08:00:28.567488 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:00:28 crc kubenswrapper[4835]: I0201 08:00:28.567749 4835 scope.go:117] "RemoveContainer" 
containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:00:28 crc kubenswrapper[4835]: E0201 08:00:28.568218 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567010 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567357 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567441 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567641 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:29 crc kubenswrapper[4835]: E0201 08:00:29.567642 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567700 4835 scope.go:117] "RemoveContainer" containerID="6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567883 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:29 crc kubenswrapper[4835]: E0201 08:00:29.568932 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" 
podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.544837 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" exitCode=1 Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.544885 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4"} Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.545359 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.546576 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.546723 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.546781 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.546921 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.546993 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:00:31 crc kubenswrapper[4835]: E0201 08:00:31.547612 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:00:36 crc kubenswrapper[4835]: I0201 08:00:36.566924 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:36 crc kubenswrapper[4835]: I0201 08:00:36.567318 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:36 crc kubenswrapper[4835]: I0201 08:00:36.567467 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:36 crc 
kubenswrapper[4835]: E0201 08:00:36.567928 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:38 crc kubenswrapper[4835]: I0201 08:00:38.567364 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:00:38 crc kubenswrapper[4835]: E0201 08:00:38.568359 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:00:39 crc kubenswrapper[4835]: I0201 08:00:39.567026 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:00:39 crc kubenswrapper[4835]: I0201 08:00:39.567395 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:00:39 crc kubenswrapper[4835]: E0201 08:00:39.567766 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:00:40 crc kubenswrapper[4835]: I0201 08:00:40.566827 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:40 crc kubenswrapper[4835]: I0201 08:00:40.566900 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:40 crc kubenswrapper[4835]: I0201 08:00:40.566922 4835 scope.go:117] "RemoveContainer" containerID="6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827" Feb 01 08:00:40 crc kubenswrapper[4835]: I0201 08:00:40.566980 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:40 crc kubenswrapper[4835]: E0201 08:00:40.767147 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.567350 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.567394 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:41 crc kubenswrapper[4835]: E0201 08:00:41.567791 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.649196 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d"} Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.649952 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.650028 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.650140 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:41 crc kubenswrapper[4835]: E0201 08:00:41.650554 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:45 crc kubenswrapper[4835]: I0201 08:00:45.567896 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:00:45 crc 
kubenswrapper[4835]: I0201 08:00:45.568038 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:00:45 crc kubenswrapper[4835]: I0201 08:00:45.568084 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:00:45 crc kubenswrapper[4835]: I0201 08:00:45.568204 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:00:45 crc kubenswrapper[4835]: I0201 08:00:45.568271 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:00:45 crc kubenswrapper[4835]: E0201 08:00:45.568977 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:00:47 crc kubenswrapper[4835]: I0201 08:00:47.580157 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:47 crc kubenswrapper[4835]: I0201 08:00:47.580291 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:47 crc kubenswrapper[4835]: I0201 08:00:47.580528 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:47 crc kubenswrapper[4835]: E0201 08:00:47.581068 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:49 crc kubenswrapper[4835]: I0201 08:00:49.567002 4835 scope.go:117] "RemoveContainer" 
containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:00:49 crc kubenswrapper[4835]: E0201 08:00:49.567882 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:00:50 crc kubenswrapper[4835]: I0201 08:00:50.567642 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:00:50 crc kubenswrapper[4835]: I0201 08:00:50.568731 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:00:50 crc kubenswrapper[4835]: E0201 08:00:50.569378 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:00:52 crc kubenswrapper[4835]: I0201 08:00:52.567596 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:52 crc kubenswrapper[4835]: I0201 08:00:52.567724 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:52 crc kubenswrapper[4835]: I0201 08:00:52.567901 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:52 crc kubenswrapper[4835]: E0201 08:00:52.568363 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:55 crc kubenswrapper[4835]: I0201 08:00:55.282121 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:00:55 crc kubenswrapper[4835]: E0201 08:00:55.282393 4835 configmap.go:193] Couldn't get configMap 
swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 08:00:55 crc kubenswrapper[4835]: E0201 08:00:55.284215 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:02:57.284178525 +0000 UTC m=+2450.404614999 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 08:00:56 crc kubenswrapper[4835]: I0201 08:00:56.566987 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:00:56 crc kubenswrapper[4835]: I0201 08:00:56.567022 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:56 crc kubenswrapper[4835]: E0201 08:00:56.839759 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:57 crc kubenswrapper[4835]: I0201 08:00:57.830949 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2"} Feb 01 08:00:57 crc kubenswrapper[4835]: I0201 08:00:57.831267 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:00:57 crc kubenswrapper[4835]: I0201 08:00:57.832057 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:57 crc kubenswrapper[4835]: E0201 08:00:57.832399 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:58 crc kubenswrapper[4835]: I0201 08:00:58.842765 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:58 crc kubenswrapper[4835]: E0201 08:00:58.843353 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:59 crc kubenswrapper[4835]: I0201 08:00:59.567808 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:59 crc kubenswrapper[4835]: I0201 
08:00:59.567969 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:59 crc kubenswrapper[4835]: I0201 08:00:59.568177 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:59 crc kubenswrapper[4835]: E0201 08:00:59.568802 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.167154 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-cron-29498881-kfzg5"] Feb 01 08:01:00 crc kubenswrapper[4835]: E0201 08:01:00.167790 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="extract-content" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.167821 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="extract-content" Feb 01 08:01:00 crc kubenswrapper[4835]: E0201 08:01:00.167863 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="extract-utilities" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.167879 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="extract-utilities" Feb 01 08:01:00 crc kubenswrapper[4835]: E0201 08:01:00.167936 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="registry-server" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.167953 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="registry-server" Feb 01 08:01:00 crc kubenswrapper[4835]: E0201 08:01:00.167975 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0334d6e7-9af5-4634-ab64-18017a9439df" containerName="collect-profiles" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.167989 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0334d6e7-9af5-4634-ab64-18017a9439df" containerName="collect-profiles" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.168458 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0334d6e7-9af5-4634-ab64-18017a9439df" containerName="collect-profiles" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.168507 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="registry-server" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.169579 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.199583 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-cron-29498881-kfzg5"] Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.268952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.269051 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.269080 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbkj5\" (UniqueName: \"kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.371171 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.371378 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.371478 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbkj5\" (UniqueName: \"kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.379355 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.379803 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.389367 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pbkj5\" (UniqueName: \"kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.501891 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.567459 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.567933 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.567978 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.568099 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.568172 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:01:00 crc kubenswrapper[4835]: E0201 08:01:00.569036 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.800297 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-cron-29498881-kfzg5"] Feb 01 08:01:00 crc kubenswrapper[4835]: W0201 08:01:00.814612 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0c36c8d_897d_4b88_a236_44fe0d511c4e.slice/crio-0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860 WatchSource:0}: Error finding container 0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860: Status 404 returned error can't find the container with id 0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860 Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.861272 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" event={"ID":"f0c36c8d-897d-4b88-a236-44fe0d511c4e","Type":"ContainerStarted","Data":"0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860"} Feb 01 08:01:01 crc kubenswrapper[4835]: I0201 08:01:01.021596 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:01 crc kubenswrapper[4835]: I0201 08:01:01.566921 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:01:01 crc kubenswrapper[4835]: I0201 08:01:01.566968 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:01 crc kubenswrapper[4835]: E0201 08:01:01.567473 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:01 crc kubenswrapper[4835]: I0201 08:01:01.872150 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" event={"ID":"f0c36c8d-897d-4b88-a236-44fe0d511c4e","Type":"ContainerStarted","Data":"a8ce14dd07f9be90e149056ac48a9e7888229aebe9f4685bf1cce84f193f3985"} Feb 01 08:01:01 crc kubenswrapper[4835]: I0201 08:01:01.897081 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" podStartSLOduration=1.897061566 podStartE2EDuration="1.897061566s" podCreationTimestamp="2026-02-01 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 08:01:01.892039035 +0000 UTC m=+2335.012475469" watchObservedRunningTime="2026-02-01 08:01:01.897061566 +0000 UTC m=+2335.017498000" Feb 01 08:01:02 crc kubenswrapper[4835]: I0201 08:01:02.884593 4835 generic.go:334] "Generic (PLEG): container finished" podID="f0c36c8d-897d-4b88-a236-44fe0d511c4e" containerID="a8ce14dd07f9be90e149056ac48a9e7888229aebe9f4685bf1cce84f193f3985" exitCode=0 Feb 01 08:01:02 crc kubenswrapper[4835]: I0201 08:01:02.884690 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" event={"ID":"f0c36c8d-897d-4b88-a236-44fe0d511c4e","Type":"ContainerDied","Data":"a8ce14dd07f9be90e149056ac48a9e7888229aebe9f4685bf1cce84f193f3985"} Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.031512 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.170581 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:04 crc kubenswrapper[4835]: E0201 08:01:04.265066 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.337264 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbkj5\" (UniqueName: \"kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5\") pod \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.337380 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys\") pod \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.337460 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data\") pod \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.343324 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5" (OuterVolumeSpecName: "kube-api-access-pbkj5") pod "f0c36c8d-897d-4b88-a236-44fe0d511c4e" (UID: "f0c36c8d-897d-4b88-a236-44fe0d511c4e"). InnerVolumeSpecName "kube-api-access-pbkj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.345337 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f0c36c8d-897d-4b88-a236-44fe0d511c4e" (UID: "f0c36c8d-897d-4b88-a236-44fe0d511c4e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.399690 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data" (OuterVolumeSpecName: "config-data") pod "f0c36c8d-897d-4b88-a236-44fe0d511c4e" (UID: "f0c36c8d-897d-4b88-a236-44fe0d511c4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.439578 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.439628 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.439649 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbkj5\" (UniqueName: \"kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5\") on node \"crc\" DevicePath \"\"" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.567331 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:01:04 crc kubenswrapper[4835]: E0201 08:01:04.567868 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.904220 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.904269 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.904269 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" event={"ID":"f0c36c8d-897d-4b88-a236-44fe0d511c4e","Type":"ContainerDied","Data":"0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860"} Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.904347 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860" Feb 01 08:01:05 crc kubenswrapper[4835]: I0201 08:01:05.022152 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.022039 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.022489 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.023042 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.023066 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.023102 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2" gracePeriod=30 Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.026633 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:07 crc kubenswrapper[4835]: E0201 08:01:07.355841 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.578737 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.579349 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 
08:01:07.579669 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:01:07 crc kubenswrapper[4835]: E0201 08:01:07.580449 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.934992 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2" exitCode=0 Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.935040 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2"} Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.935070 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"} Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.935088 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.935315 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.935882 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:07 crc kubenswrapper[4835]: E0201 08:01:07.936210 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:08 crc kubenswrapper[4835]: I0201 08:01:08.952192 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:08 crc kubenswrapper[4835]: E0201 08:01:08.952563 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:10 crc 
kubenswrapper[4835]: I0201 08:01:10.567578 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:01:10 crc kubenswrapper[4835]: I0201 08:01:10.567988 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:01:10 crc kubenswrapper[4835]: I0201 08:01:10.568189 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:01:10 crc kubenswrapper[4835]: E0201 08:01:10.568740 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:11 crc kubenswrapper[4835]: I0201 08:01:11.567123 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:01:11 crc kubenswrapper[4835]: I0201 08:01:11.567564 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:01:11 crc kubenswrapper[4835]: I0201 08:01:11.567600 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:01:11 crc kubenswrapper[4835]: I0201 08:01:11.567696 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:01:11 crc kubenswrapper[4835]: I0201 08:01:11.567750 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:01:11 crc kubenswrapper[4835]: E0201 08:01:11.568206 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" 
podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:01:12 crc kubenswrapper[4835]: I0201 08:01:12.566886 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:01:12 crc kubenswrapper[4835]: I0201 08:01:12.566930 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:12 crc kubenswrapper[4835]: E0201 08:01:12.567398 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:13 crc kubenswrapper[4835]: I0201 08:01:13.022889 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:15 crc kubenswrapper[4835]: I0201 08:01:15.021861 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:16 crc kubenswrapper[4835]: I0201 08:01:16.021931 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.048364 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="811dcfbbfbce2457a26cf2cfd3d7f241f223d0bd48897b5e6e54984050426b01" exitCode=1 Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.048455 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"811dcfbbfbce2457a26cf2cfd3d7f241f223d0bd48897b5e6e54984050426b01"} Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.049784 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.049942 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.049988 4835 scope.go:117] "RemoveContainer" containerID="811dcfbbfbce2457a26cf2cfd3d7f241f223d0bd48897b5e6e54984050426b01" Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.050128 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:01:17 crc kubenswrapper[4835]: E0201 08:01:17.245808 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:18 crc kubenswrapper[4835]: I0201 08:01:18.074719 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160"} Feb 01 08:01:18 crc kubenswrapper[4835]: I0201 08:01:18.075827 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:01:18 crc kubenswrapper[4835]: I0201 08:01:18.075940 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:01:18 crc kubenswrapper[4835]: I0201 08:01:18.076145 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:01:18 crc kubenswrapper[4835]: E0201 08:01:18.076626 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:18 crc kubenswrapper[4835]: I0201 08:01:18.567318 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:01:18 crc kubenswrapper[4835]: E0201 08:01:18.567689 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:01:19 crc kubenswrapper[4835]: I0201 08:01:19.022157 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:19 crc kubenswrapper[4835]: I0201 08:01:19.022266 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:01:19 
crc kubenswrapper[4835]: I0201 08:01:19.023872 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:01:19 crc kubenswrapper[4835]: I0201 08:01:19.023929 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:19 crc kubenswrapper[4835]: I0201 08:01:19.023977 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" gracePeriod=30 Feb 01 08:01:19 crc kubenswrapper[4835]: I0201 08:01:19.025301 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:19 crc kubenswrapper[4835]: E0201 08:01:19.149848 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.019583 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.100:8080/healthcheck\": dial tcp 10.217.0.100:8080: connect: connection refused" Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.111532 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" exitCode=0 Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.111571 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"} Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.111606 4835 scope.go:117] "RemoveContainer" containerID="62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2" Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.112709 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.112771 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:20 crc kubenswrapper[4835]: E0201 08:01:20.113259 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:22 crc kubenswrapper[4835]: I0201 08:01:22.568572 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:01:22 crc kubenswrapper[4835]: I0201 08:01:22.568978 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:01:22 crc kubenswrapper[4835]: I0201 08:01:22.569164 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:01:22 crc kubenswrapper[4835]: E0201 08:01:22.569674 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:24 crc kubenswrapper[4835]: I0201 08:01:24.569062 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:01:24 crc kubenswrapper[4835]: I0201 08:01:24.569190 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:01:24 crc kubenswrapper[4835]: I0201 08:01:24.569233 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:01:24 crc kubenswrapper[4835]: I0201 08:01:24.569351 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:01:24 crc kubenswrapper[4835]: I0201 08:01:24.569454 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:01:24 crc kubenswrapper[4835]: E0201 08:01:24.570040 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:01:25 crc kubenswrapper[4835]: I0201 08:01:25.567328 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:01:25 crc kubenswrapper[4835]: I0201 08:01:25.567854 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:25 crc kubenswrapper[4835]: E0201 08:01:25.568325 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.174189 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="700112fad0f4ad91d48c44e77419088f8f3cdd322d0db821e4eac71b3672a4b2" exitCode=1 Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.174296 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"700112fad0f4ad91d48c44e77419088f8f3cdd322d0db821e4eac71b3672a4b2"} Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.175940 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.176153 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.176386 4835 scope.go:117] "RemoveContainer" containerID="700112fad0f4ad91d48c44e77419088f8f3cdd322d0db821e4eac71b3672a4b2" Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.176479 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:01:26 crc kubenswrapper[4835]: E0201 08:01:26.515005 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" 
Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.200123 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" exitCode=1 Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.200378 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996"} Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.200575 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396"} Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.200609 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.201392 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.201558 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.201767 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:01:27 crc kubenswrapper[4835]: E0201 08:01:27.618226 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.222495 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" exitCode=1 Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.222528 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" exitCode=1 Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.222550 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38"} Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.222656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112"} Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.222747 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.223353 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 
08:01:28.223433 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.223534 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:01:28 crc kubenswrapper[4835]: E0201 08:01:28.223799 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.297832 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.246579 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.246742 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.246994 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:01:29 crc kubenswrapper[4835]: E0201 08:01:29.247638 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.568576 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.568735 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.568932 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.268463 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" 
containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" exitCode=1 Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.268656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d"} Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.268928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664"} Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.268948 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315"} Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.268971 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.269678 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:01:30 crc kubenswrapper[4835]: E0201 08:01:30.270239 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.316789 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" exitCode=1 Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.316842 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" exitCode=1 Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.316872 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d"} Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.316914 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664"} Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.316947 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.317801 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.317942 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.318139 4835 scope.go:117] 
"RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:01:31 crc kubenswrapper[4835]: E0201 08:01:31.318762 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.393941 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.567598 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:01:31 crc kubenswrapper[4835]: E0201 08:01:31.568132 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:01:35 crc kubenswrapper[4835]: I0201 08:01:35.566614 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:01:35 crc kubenswrapper[4835]: I0201 08:01:35.566965 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:35 crc kubenswrapper[4835]: E0201 08:01:35.567315 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:36 crc kubenswrapper[4835]: I0201 08:01:36.567808 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:01:36 crc kubenswrapper[4835]: I0201 08:01:36.568229 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:01:36 crc kubenswrapper[4835]: I0201 08:01:36.568259 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:01:36 crc kubenswrapper[4835]: I0201 08:01:36.568339 4835 scope.go:117] "RemoveContainer" 
containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:01:36 crc kubenswrapper[4835]: I0201 08:01:36.568382 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:01:36 crc kubenswrapper[4835]: E0201 08:01:36.569028 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:01:39 crc kubenswrapper[4835]: I0201 08:01:39.567354 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:01:39 crc kubenswrapper[4835]: I0201 08:01:39.567816 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:39 crc kubenswrapper[4835]: E0201 08:01:39.568180 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:41 crc kubenswrapper[4835]: I0201 08:01:41.567250 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:01:41 crc kubenswrapper[4835]: I0201 08:01:41.567636 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:01:41 crc kubenswrapper[4835]: I0201 08:01:41.567752 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:01:41 crc kubenswrapper[4835]: E0201 08:01:41.568076 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with 
CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:44 crc kubenswrapper[4835]: I0201 08:01:44.567687 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:01:44 crc kubenswrapper[4835]: I0201 08:01:44.567861 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:01:44 crc kubenswrapper[4835]: I0201 08:01:44.568101 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:01:44 crc kubenswrapper[4835]: E0201 08:01:44.568775 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:46 crc kubenswrapper[4835]: I0201 08:01:46.566662 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:01:46 crc kubenswrapper[4835]: E0201 08:01:46.567531 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:01:47 crc kubenswrapper[4835]: I0201 08:01:47.576151 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:01:47 crc kubenswrapper[4835]: I0201 08:01:47.576337 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:01:47 crc kubenswrapper[4835]: I0201 08:01:47.576405 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:01:47 crc kubenswrapper[4835]: I0201 08:01:47.576625 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:01:47 crc kubenswrapper[4835]: I0201 08:01:47.576719 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:01:47 crc kubenswrapper[4835]: E0201 08:01:47.577307 4835 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:01:50 crc kubenswrapper[4835]: I0201 08:01:50.567279 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:01:50 crc kubenswrapper[4835]: I0201 08:01:50.567668 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:50 crc kubenswrapper[4835]: E0201 08:01:50.568000 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:51 crc kubenswrapper[4835]: I0201 08:01:51.566833 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:01:51 crc kubenswrapper[4835]: I0201 08:01:51.567292 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:51 crc kubenswrapper[4835]: E0201 08:01:51.854826 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:52 crc kubenswrapper[4835]: I0201 08:01:52.587902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08"} Feb 01 08:01:52 crc kubenswrapper[4835]: I0201 08:01:52.588506 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:01:52 
crc kubenswrapper[4835]: I0201 08:01:52.589068 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:52 crc kubenswrapper[4835]: E0201 08:01:52.589494 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:53 crc kubenswrapper[4835]: I0201 08:01:53.596273 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:53 crc kubenswrapper[4835]: E0201 08:01:53.596606 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:54 crc kubenswrapper[4835]: I0201 08:01:54.567567 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:01:54 crc kubenswrapper[4835]: I0201 08:01:54.567792 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:01:54 crc kubenswrapper[4835]: I0201 08:01:54.568046 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:01:54 crc kubenswrapper[4835]: E0201 08:01:54.568688 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:57 crc kubenswrapper[4835]: I0201 08:01:57.539900 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:57 crc kubenswrapper[4835]: I0201 08:01:57.540781 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:58 crc kubenswrapper[4835]: I0201 08:01:58.568182 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:01:58 crc kubenswrapper[4835]: I0201 08:01:58.568324 4835 scope.go:117] "RemoveContainer" 
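Note: the back-off durations quoted in the entries above follow the kubelet's CrashLoopBackOff schedule, which doubles a 10s base delay on each failed restart and caps it at 5m0s; that is why the same pods move through "back-off 10s" (seen later at 08:03:14), "back-off 2m40s" and finally "back-off 5m0s". A minimal sketch of that schedule in Python, assuming the default base and cap (the kubelet's flags are not shown in this log):

    from datetime import timedelta

    # Delay before the n-th restart of a crash-looping container:
    # base * 2**n seconds, capped. base=10s / cap=5m are assumed kubelet
    # defaults; they reproduce the 10s, 2m40s and 5m0s delays logged here.
    def crashloop_backoff(n: int, base: int = 10, cap: int = 300) -> timedelta:
        return timedelta(seconds=min(base * 2 ** n, cap))

    print([str(crashloop_backoff(n)) for n in range(6)])
    # ['0:00:10', '0:00:20', '0:00:40', '0:01:20', '0:02:40', '0:05:00']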
containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:01:58 crc kubenswrapper[4835]: I0201 08:01:58.568534 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:01:58 crc kubenswrapper[4835]: E0201 08:01:58.569060 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:59 crc kubenswrapper[4835]: I0201 08:01:59.567877 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:01:59 crc kubenswrapper[4835]: E0201 08:01:59.568361 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.538639 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.566519 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.566593 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.566621 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.566716 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.566749 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:02:00 crc kubenswrapper[4835]: E0201 08:02:00.567027 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:02:02 crc kubenswrapper[4835]: I0201 08:02:02.538346 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:02 crc kubenswrapper[4835]: I0201 08:02:02.567106 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:02:02 crc kubenswrapper[4835]: I0201 08:02:02.567149 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:02:02 crc kubenswrapper[4835]: E0201 08:02:02.567571 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.537461 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.537582 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.538944 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.538998 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.539045 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" 
containerID="cri-o://c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08" gracePeriod=30 Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.541074 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.694749 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08" exitCode=0 Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.694800 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08"} Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.694838 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:02:03 crc kubenswrapper[4835]: E0201 08:02:03.888391 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:02:04 crc kubenswrapper[4835]: I0201 08:02:04.705269 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"} Feb 01 08:02:04 crc kubenswrapper[4835]: I0201 08:02:04.706320 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:02:04 crc kubenswrapper[4835]: I0201 08:02:04.706182 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:02:04 crc kubenswrapper[4835]: E0201 08:02:04.707237 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:02:05 crc kubenswrapper[4835]: I0201 08:02:05.567940 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:02:05 crc kubenswrapper[4835]: I0201 08:02:05.568109 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:02:05 crc kubenswrapper[4835]: I0201 08:02:05.568360 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:02:05 crc kubenswrapper[4835]: E0201 08:02:05.569004 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
Feb 01 08:02:05 crc kubenswrapper[4835]: E0201 08:02:05.569004 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:02:05 crc kubenswrapper[4835]: I0201 08:02:05.717049 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390"
Feb 01 08:02:05 crc kubenswrapper[4835]: E0201 08:02:05.717364 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:02:09 crc kubenswrapper[4835]: I0201 08:02:09.538571 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:02:11 crc kubenswrapper[4835]: I0201 08:02:11.566780 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315"
Feb 01 08:02:11 crc kubenswrapper[4835]: I0201 08:02:11.567083 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664"
Feb 01 08:02:11 crc kubenswrapper[4835]: I0201 08:02:11.567167 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d"
Feb 01 08:02:11 crc kubenswrapper[4835]: E0201 08:02:11.567431 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:02:12 crc kubenswrapper[4835]: I0201 08:02:12.537688 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:02:12 crc kubenswrapper[4835]: I0201 08:02:12.537831 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:02:12 crc kubenswrapper[4835]: I0201 08:02:12.566925 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"
Feb 01 08:02:12 crc kubenswrapper[4835]: E0201 08:02:12.567322 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:02:13 crc kubenswrapper[4835]: I0201 08:02:13.568323 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"
Feb 01 08:02:13 crc kubenswrapper[4835]: I0201 08:02:13.568523 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90"
Feb 01 08:02:13 crc kubenswrapper[4835]: I0201 08:02:13.568579 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4"
Feb 01 08:02:13 crc kubenswrapper[4835]: I0201 08:02:13.568729 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be"
Feb 01 08:02:13 crc kubenswrapper[4835]: I0201 08:02:13.568809 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b"
Feb 01 08:02:13 crc kubenswrapper[4835]: E0201 08:02:13.569454 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.539383 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.539988 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.540968 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.540997 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390"
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.541031 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" gracePeriod=30
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.542630 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:02:15 crc kubenswrapper[4835]: E0201 08:02:15.665343 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.814513 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" exitCode=0
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.814571 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"}
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.814618 4835 scope.go:117] "RemoveContainer" containerID="c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08"
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.815295 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"
Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.815347 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390"
Feb 01 08:02:15 crc kubenswrapper[4835]: E0201 08:02:15.815758 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:02:16 crc kubenswrapper[4835]: I0201 08:02:16.567035 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"
Feb 01 08:02:16 crc kubenswrapper[4835]: I0201 08:02:16.567063 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d"
Feb 01 08:02:16 crc kubenswrapper[4835]: E0201 08:02:16.567353 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:02:18 crc kubenswrapper[4835]: I0201 08:02:18.567969 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396"
Feb 01 08:02:18 crc kubenswrapper[4835]: I0201 08:02:18.568534 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112"
Feb 01 08:02:18 crc kubenswrapper[4835]: I0201 08:02:18.568780 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38"
Feb 01 08:02:18 crc kubenswrapper[4835]: E0201 08:02:18.569531 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.907926 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" exitCode=1
Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.907984 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a"}
Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.908599 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0"
Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909516 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"
Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909615 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90"
Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909645 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4"
Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909716 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a"
Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909744 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be"
Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909800 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b"
Feb 01 08:02:24 crc kubenswrapper[4835]: E0201 08:02:24.910179 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:02:26 crc kubenswrapper[4835]: I0201 08:02:26.567486 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315"
Feb 01 08:02:26 crc kubenswrapper[4835]: I0201 08:02:26.567911 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664"
Feb 01 08:02:26 crc kubenswrapper[4835]: I0201 08:02:26.567957 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"
Feb 01 08:02:26 crc kubenswrapper[4835]: I0201 08:02:26.568092 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d"
Feb 01 08:02:26 crc kubenswrapper[4835]: E0201 08:02:26.568509 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:02:26 crc kubenswrapper[4835]: E0201 08:02:26.568681 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:02:27 crc kubenswrapper[4835]: I0201 08:02:27.572188 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"
Feb 01 08:02:27 crc kubenswrapper[4835]: I0201 08:02:27.572214 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390"
Feb 01 08:02:27 crc kubenswrapper[4835]: I0201 08:02:27.572402 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"
Feb 01 08:02:27 crc kubenswrapper[4835]: I0201 08:02:27.572456 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d"
Feb 01 08:02:27 crc kubenswrapper[4835]: E0201 08:02:27.572452 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:02:27 crc kubenswrapper[4835]: E0201 08:02:27.572719 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:02:31 crc kubenswrapper[4835]: I0201 08:02:31.567213 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396"
Feb 01 08:02:31 crc kubenswrapper[4835]: I0201 08:02:31.567952 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112"
Feb 01 08:02:31 crc kubenswrapper[4835]: I0201 08:02:31.568069 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38"
Feb 01 08:02:31 crc kubenswrapper[4835]: E0201 08:02:31.568436 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:02:37 crc kubenswrapper[4835]: I0201 08:02:37.572753 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"
Feb 01 08:02:37 crc kubenswrapper[4835]: E0201 08:02:37.573625 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.567494 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"
Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.567530 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390"
Feb 01 08:02:38 crc kubenswrapper[4835]: E0201 08:02:38.567765 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568098 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"
Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568336 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90"
Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568441 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4"
Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568575 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a"
Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568593 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be"
Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568680 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b"
Feb 01 08:02:38 crc kubenswrapper[4835]: E0201 08:02:38.569626 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:02:40 crc kubenswrapper[4835]: I0201 08:02:40.567096 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315"
Feb 01 08:02:40 crc kubenswrapper[4835]: I0201 08:02:40.567533 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664"
Feb 01 08:02:40 crc kubenswrapper[4835]: I0201 08:02:40.567670 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d"
Feb 01 08:02:40 crc kubenswrapper[4835]: E0201 08:02:40.568096 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:02:41 crc kubenswrapper[4835]: I0201 08:02:41.567672 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"
Feb 01 08:02:41 crc kubenswrapper[4835]: I0201 08:02:41.567719 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d"
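Note: the same "Error syncing pod" entries recur every few seconds for the same pods, so the raw log is a poor way to see which containers are looping hardest. Every back-off message embeds a container=<name> pod=<pod>_<namespace>(<uid>) pair, so a few lines of Python can rank the offenders; the kubelet.log path below is an assumption:

    import re
    from collections import Counter

    # Matches e.g. 'restarting failed container=object-expirer
    # pod=swift-storage-1_swift-kuttl-tests(...)' in the entries above.
    PATTERN = re.compile(r"restarting failed container=(\S+) pod=([^_\s]+)_")

    def tally_crashloops(lines):
        counts = Counter()
        for line in lines:
            for container, pod in PATTERN.findall(line):
                counts[(pod, container)] += 1
        return counts

    with open("kubelet.log") as f:  # path assumed
        for (pod, container), n in tally_crashloops(f).most_common(10):
            print(f"{n:4d}  {pod}/{container}")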
Feb 01 08:02:41 crc kubenswrapper[4835]: E0201 08:02:41.568245 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:02:43 crc kubenswrapper[4835]: I0201 08:02:43.567661 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396"
Feb 01 08:02:43 crc kubenswrapper[4835]: I0201 08:02:43.568060 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112"
Feb 01 08:02:43 crc kubenswrapper[4835]: I0201 08:02:43.568196 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38"
Feb 01 08:02:43 crc kubenswrapper[4835]: E0201 08:02:43.568562 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:02:49 crc kubenswrapper[4835]: I0201 08:02:49.567097 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"
Feb 01 08:02:49 crc kubenswrapper[4835]: I0201 08:02:49.567590 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390"
Feb 01 08:02:49 crc kubenswrapper[4835]: E0201 08:02:49.567973 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:02:51 crc kubenswrapper[4835]: I0201 08:02:51.567573 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"
Feb 01 08:02:51 crc kubenswrapper[4835]: E0201 08:02:51.568627 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567222 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"
Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567614 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90"
Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567635 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4"
Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567679 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a"
Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567686 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be"
Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567717 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b"
Feb 01 08:02:53 crc kubenswrapper[4835]: E0201 08:02:53.568021 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:02:54 crc kubenswrapper[4835]: I0201 08:02:54.566677 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315"
Feb 01 08:02:54 crc kubenswrapper[4835]: I0201 08:02:54.566763 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664"
Feb 01 08:02:54 crc kubenswrapper[4835]: I0201 08:02:54.566880 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d"
Feb 01 08:02:54 crc kubenswrapper[4835]: E0201 08:02:54.567186 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:02:54 crc kubenswrapper[4835]: I0201 08:02:54.569496 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"
Feb 01 08:02:54 crc kubenswrapper[4835]: I0201 08:02:54.569744 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d"
Feb 01 08:02:54 crc kubenswrapper[4835]: E0201 08:02:54.570448 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:02:55 crc kubenswrapper[4835]: I0201 08:02:55.567512 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396"
Feb 01 08:02:55 crc kubenswrapper[4835]: I0201 08:02:55.567639 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112"
Feb 01 08:02:55 crc kubenswrapper[4835]: I0201 08:02:55.567820 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38"
Feb 01 08:02:55 crc kubenswrapper[4835]: E0201 08:02:55.568338 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:02:57 crc kubenswrapper[4835]: I0201 08:02:57.328496 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 08:02:57 crc kubenswrapper[4835]: E0201 08:02:57.328758 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 08:02:57 crc kubenswrapper[4835]: E0201 08:02:57.329131 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:04:59.329102704 +0000 UTC m=+2572.449539168 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
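Note: unlike the crash loops above, this failure has an explicit root cause in the log: the ConfigMap swift-ring-config-data does not exist, so MountVolume.SetUp for ring-data-devices fails and the retry is backed off to 08:04:59 (durationBeforeRetry 2m2s); the pod sync for swift-ring-rebalance-w2wt7 then fails at 08:03:07 below with "unmounted volumes=[ring-data-devices]". A programmatic existence check, assuming the `kubernetes` Python client and a kubeconfig pointing at this cluster:

    # Check whether the ConfigMap whose absence blocks the mount exists yet.
    # Client library and kubeconfig access are assumptions; names are from the log.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    try:
        v1.read_namespaced_config_map("swift-ring-config-data", "swift-kuttl-tests")
        print("configmap present; the next mount retry should succeed")
    except client.rest.ApiException as e:
        if e.status == 404:
            print("configmap missing: swift-ring-rebalance-w2wt7 cannot mount ring-data-devices")
        else:
            raise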
Feb 01 08:03:01 crc kubenswrapper[4835]: I0201 08:03:01.566744 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"
Feb 01 08:03:01 crc kubenswrapper[4835]: I0201 08:03:01.567137 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390"
Feb 01 08:03:01 crc kubenswrapper[4835]: E0201 08:03:01.567776 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:03:03 crc kubenswrapper[4835]: I0201 08:03:03.566786 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"
Feb 01 08:03:03 crc kubenswrapper[4835]: E0201 08:03:03.567185 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:03:06 crc kubenswrapper[4835]: I0201 08:03:06.567080 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"
Feb 01 08:03:06 crc kubenswrapper[4835]: I0201 08:03:06.567134 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d"
Feb 01 08:03:06 crc kubenswrapper[4835]: E0201 08:03:06.567398 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573468 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"
Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573543 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90"
Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573568 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4"
Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573620 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a"
Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573628 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be"
Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573664 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b"
Feb 01 08:03:07 crc kubenswrapper[4835]: E0201 08:03:07.574020 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:03:07 crc kubenswrapper[4835]: E0201 08:03:07.905851 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc"
Feb 01 08:03:08 crc kubenswrapper[4835]: I0201 08:03:08.296991 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 08:03:08 crc kubenswrapper[4835]: I0201 08:03:08.567624 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315"
Feb 01 08:03:08 crc kubenswrapper[4835]: I0201 08:03:08.567750 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664"
Feb 01 08:03:08 crc kubenswrapper[4835]: I0201 08:03:08.567922 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d"
Feb 01 08:03:08 crc kubenswrapper[4835]: E0201 08:03:08.568588 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:03:10 crc kubenswrapper[4835]: I0201 08:03:10.568588 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396"
Feb 01 08:03:10 crc kubenswrapper[4835]: I0201 08:03:10.569110 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112"
Feb 01 08:03:10 crc kubenswrapper[4835]: I0201 08:03:10.569288 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38"
Feb 01 08:03:10 crc kubenswrapper[4835]: E0201 08:03:10.569858 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.363368 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c" exitCode=1
Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.363453 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c"}
Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.364128 4835 scope.go:117] "RemoveContainer" containerID="c79ff7541114600de37a172509eea1cb11eec93c315c86aafccf0b9d756e98ea"
Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.365272 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315"
Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.365495 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664"
Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.365742 4835 scope.go:117] "RemoveContainer" containerID="7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c"
Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.365807 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d"
Feb 01 08:03:14 crc kubenswrapper[4835]: E0201 08:03:14.366571 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.566598 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"
Feb 01 08:03:14 crc kubenswrapper[4835]: E0201 08:03:14.566857 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:03:15 crc kubenswrapper[4835]: I0201 08:03:15.567027 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"
Feb 01 08:03:15 crc kubenswrapper[4835]: I0201 08:03:15.567074 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390"
Feb 01 08:03:15 crc kubenswrapper[4835]: E0201 08:03:15.567547 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:03:19 crc kubenswrapper[4835]: I0201 08:03:19.567652 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:03:19 crc kubenswrapper[4835]: I0201 08:03:19.568054 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:03:19 crc kubenswrapper[4835]: E0201 08:03:19.568533 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567025 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567114 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567144 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567241 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567252 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567296 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567307 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567433 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567558 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:03:22 crc kubenswrapper[4835]: E0201 08:03:22.567822 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" 
podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:03:22 crc kubenswrapper[4835]: E0201 08:03:22.723313 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.474885 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"} Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.476205 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.476361 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.476604 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.476640 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.476729 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:03:23 crc kubenswrapper[4835]: E0201 08:03:23.477534 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", 
failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:03:28 crc kubenswrapper[4835]: I0201 08:03:28.567118 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:03:28 crc kubenswrapper[4835]: I0201 08:03:28.567646 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:03:28 crc kubenswrapper[4835]: I0201 08:03:28.567769 4835 scope.go:117] "RemoveContainer" containerID="7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c" Feb 01 08:03:28 crc kubenswrapper[4835]: I0201 08:03:28.567781 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:03:28 crc kubenswrapper[4835]: E0201 08:03:28.765013 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.538462 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565"} Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.539085 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.539146 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.539232 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:03:29 crc kubenswrapper[4835]: E0201 08:03:29.539500 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" 
pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.567599 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.567641 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.567820 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:03:29 crc kubenswrapper[4835]: E0201 08:03:29.567934 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:03:29 crc kubenswrapper[4835]: E0201 08:03:29.568175 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:03:32 crc kubenswrapper[4835]: I0201 08:03:32.567399 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:03:32 crc kubenswrapper[4835]: I0201 08:03:32.567818 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:03:32 crc kubenswrapper[4835]: E0201 08:03:32.568389 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.096140 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:35 crc kubenswrapper[4835]: E0201 08:03:35.097015 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c36c8d-897d-4b88-a236-44fe0d511c4e" containerName="keystone-cron" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.097035 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c36c8d-897d-4b88-a236-44fe0d511c4e" containerName="keystone-cron" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.097334 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c36c8d-897d-4b88-a236-44fe0d511c4e" 
containerName="keystone-cron" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.099179 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.114456 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.177598 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njknp\" (UniqueName: \"kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.177706 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.177785 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.279265 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.279387 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njknp\" (UniqueName: \"kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.279472 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.279994 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.279994 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 
08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.302059 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njknp\" (UniqueName: \"kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.419723 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.568525 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.569045 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.569248 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:03:35 crc kubenswrapper[4835]: E0201 08:03:35.569717 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.569988 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.570103 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.570181 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.570190 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.570223 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:03:35 crc kubenswrapper[4835]: E0201 08:03:35.570795 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s 
restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.846388 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:36 crc kubenswrapper[4835]: I0201 08:03:36.610449 4835 generic.go:334] "Generic (PLEG): container finished" podID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerID="3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7" exitCode=0 Feb 01 08:03:36 crc kubenswrapper[4835]: I0201 08:03:36.610554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerDied","Data":"3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7"} Feb 01 08:03:36 crc kubenswrapper[4835]: I0201 08:03:36.610889 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerStarted","Data":"58ed440a20da43dc583d7240e9212e0673c19c5402a0c25302ee77b406df25bc"} Feb 01 08:03:36 crc kubenswrapper[4835]: I0201 08:03:36.612354 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 08:03:37 crc kubenswrapper[4835]: I0201 08:03:37.623029 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerStarted","Data":"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd"} Feb 01 08:03:38 crc kubenswrapper[4835]: I0201 08:03:38.637943 4835 generic.go:334] "Generic (PLEG): container finished" podID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerID="aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd" exitCode=0 Feb 01 08:03:38 crc kubenswrapper[4835]: I0201 08:03:38.638042 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerDied","Data":"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd"} Feb 01 08:03:39 crc kubenswrapper[4835]: I0201 08:03:39.649556 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerStarted","Data":"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457"} Feb 01 08:03:39 crc kubenswrapper[4835]: I0201 08:03:39.675362 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7j2wj" podStartSLOduration=2.209158762 podStartE2EDuration="4.675336864s" podCreationTimestamp="2026-02-01 08:03:35 +0000 UTC" firstStartedPulling="2026-02-01 08:03:36.612151049 +0000 UTC m=+2489.732587483" lastFinishedPulling="2026-02-01 08:03:39.078329111 +0000 UTC m=+2492.198765585" 
observedRunningTime="2026-02-01 08:03:39.665391805 +0000 UTC m=+2492.785828229" watchObservedRunningTime="2026-02-01 08:03:39.675336864 +0000 UTC m=+2492.795773328" Feb 01 08:03:40 crc kubenswrapper[4835]: I0201 08:03:40.566465 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:03:40 crc kubenswrapper[4835]: E0201 08:03:40.566764 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:03:40 crc kubenswrapper[4835]: I0201 08:03:40.567670 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:03:40 crc kubenswrapper[4835]: I0201 08:03:40.567752 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:03:40 crc kubenswrapper[4835]: I0201 08:03:40.567874 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:03:40 crc kubenswrapper[4835]: E0201 08:03:40.568240 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:03:44 crc kubenswrapper[4835]: I0201 08:03:44.566858 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:03:44 crc kubenswrapper[4835]: I0201 08:03:44.567188 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:03:44 crc kubenswrapper[4835]: I0201 08:03:44.567283 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:03:44 crc kubenswrapper[4835]: I0201 08:03:44.567307 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:03:44 crc kubenswrapper[4835]: E0201 08:03:44.567405 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" 
pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:03:44 crc kubenswrapper[4835]: E0201 08:03:44.567535 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:03:45 crc kubenswrapper[4835]: I0201 08:03:45.420214 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:45 crc kubenswrapper[4835]: I0201 08:03:45.420267 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:46 crc kubenswrapper[4835]: I0201 08:03:46.466469 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j2wj" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="registry-server" probeResult="failure" output=< Feb 01 08:03:46 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 01 08:03:46 crc kubenswrapper[4835]: > Feb 01 08:03:47 crc kubenswrapper[4835]: I0201 08:03:47.591107 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:03:47 crc kubenswrapper[4835]: I0201 08:03:47.591196 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:03:47 crc kubenswrapper[4835]: I0201 08:03:47.591308 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:03:47 crc kubenswrapper[4835]: E0201 08:03:47.592123 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:03:49 crc kubenswrapper[4835]: I0201 08:03:49.568520 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:03:49 crc kubenswrapper[4835]: I0201 08:03:49.568807 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:03:49 crc kubenswrapper[4835]: I0201 08:03:49.568880 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:03:49 crc kubenswrapper[4835]: I0201 
08:03:49.568888 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:03:49 crc kubenswrapper[4835]: I0201 08:03:49.568918 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:03:49 crc kubenswrapper[4835]: E0201 08:03:49.569183 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:03:52 crc kubenswrapper[4835]: I0201 08:03:52.567181 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:03:52 crc kubenswrapper[4835]: E0201 08:03:52.567714 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:03:52 crc kubenswrapper[4835]: I0201 08:03:52.567878 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:03:52 crc kubenswrapper[4835]: I0201 08:03:52.567939 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:03:52 crc kubenswrapper[4835]: I0201 08:03:52.568035 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:03:52 crc kubenswrapper[4835]: E0201 08:03:52.568320 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:03:55 crc kubenswrapper[4835]: I0201 08:03:55.484727 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:55 crc kubenswrapper[4835]: I0201 08:03:55.558282 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:55 crc kubenswrapper[4835]: I0201 08:03:55.737029 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:56 crc kubenswrapper[4835]: I0201 08:03:56.805612 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7j2wj" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="registry-server" containerID="cri-o://77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457" gracePeriod=2 Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.165756 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.273160 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njknp\" (UniqueName: \"kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp\") pod \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.273317 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities\") pod \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.273353 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content\") pod \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.274246 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities" (OuterVolumeSpecName: "utilities") pod "bebc21e2-e3f2-411b-ade8-2c3137cc286e" (UID: "bebc21e2-e3f2-411b-ade8-2c3137cc286e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.286578 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp" (OuterVolumeSpecName: "kube-api-access-njknp") pod "bebc21e2-e3f2-411b-ade8-2c3137cc286e" (UID: "bebc21e2-e3f2-411b-ade8-2c3137cc286e"). InnerVolumeSpecName "kube-api-access-njknp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.375102 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.375132 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njknp\" (UniqueName: \"kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp\") on node \"crc\" DevicePath \"\"" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.408741 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bebc21e2-e3f2-411b-ade8-2c3137cc286e" (UID: "bebc21e2-e3f2-411b-ade8-2c3137cc286e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.476126 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.576792 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.576837 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:03:57 crc kubenswrapper[4835]: E0201 08:03:57.577311 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.817832 4835 generic.go:334] "Generic (PLEG): container finished" podID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerID="77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457" exitCode=0 Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.817880 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerDied","Data":"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457"} Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.817931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerDied","Data":"58ed440a20da43dc583d7240e9212e0673c19c5402a0c25302ee77b406df25bc"} Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.817951 4835 scope.go:117] "RemoveContainer" containerID="77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.818152 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.848977 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.853278 4835 scope.go:117] "RemoveContainer" containerID="aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.855571 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.876270 4835 scope.go:117] "RemoveContainer" containerID="3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.923346 4835 scope.go:117] "RemoveContainer" containerID="77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457" Feb 01 08:03:57 crc kubenswrapper[4835]: E0201 08:03:57.923965 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457\": container with ID starting with 77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457 not found: ID does not exist" containerID="77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.924019 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457"} err="failed to get container status \"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457\": rpc error: code = NotFound desc = could not find container \"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457\": container with ID starting with 77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457 not found: ID does not exist" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.924050 4835 scope.go:117] "RemoveContainer" containerID="aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd" Feb 01 08:03:57 crc kubenswrapper[4835]: E0201 08:03:57.927912 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd\": container with ID starting with aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd not found: ID does not exist" containerID="aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.928039 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd"} err="failed to get container status \"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd\": rpc error: code = NotFound desc = could not find container \"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd\": container with ID starting with aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd not found: ID does not exist" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.929071 4835 scope.go:117] "RemoveContainer" containerID="3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7" Feb 01 08:03:57 crc kubenswrapper[4835]: E0201 08:03:57.929674 4835 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7\": container with ID starting with 3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7 not found: ID does not exist" containerID="3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.929718 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7"} err="failed to get container status \"3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7\": rpc error: code = NotFound desc = could not find container \"3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7\": container with ID starting with 3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7 not found: ID does not exist" Feb 01 08:03:58 crc kubenswrapper[4835]: I0201 08:03:58.566822 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:03:58 crc kubenswrapper[4835]: I0201 08:03:58.567187 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:03:58 crc kubenswrapper[4835]: E0201 08:03:58.567606 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:03:59 crc kubenswrapper[4835]: I0201 08:03:59.567434 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:03:59 crc kubenswrapper[4835]: I0201 08:03:59.567519 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:03:59 crc kubenswrapper[4835]: I0201 08:03:59.567635 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:03:59 crc kubenswrapper[4835]: E0201 08:03:59.567956 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:03:59 crc kubenswrapper[4835]: I0201 08:03:59.575400 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" 
path="/var/lib/kubelet/pods/bebc21e2-e3f2-411b-ade8-2c3137cc286e/volumes" Feb 01 08:04:02 crc kubenswrapper[4835]: I0201 08:04:02.568067 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:04:02 crc kubenswrapper[4835]: I0201 08:04:02.568603 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:04:02 crc kubenswrapper[4835]: I0201 08:04:02.568763 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:04:02 crc kubenswrapper[4835]: I0201 08:04:02.568813 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:04:02 crc kubenswrapper[4835]: I0201 08:04:02.568895 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:04:02 crc kubenswrapper[4835]: E0201 08:04:02.569482 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:04:03 crc kubenswrapper[4835]: I0201 08:04:03.567693 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:04:03 crc kubenswrapper[4835]: I0201 08:04:03.880303 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549"} Feb 01 08:04:06 crc kubenswrapper[4835]: I0201 08:04:06.567244 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:04:06 crc kubenswrapper[4835]: I0201 08:04:06.567921 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:04:06 crc kubenswrapper[4835]: I0201 08:04:06.568101 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:04:06 crc kubenswrapper[4835]: E0201 08:04:06.568612 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s 
restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:04:10 crc kubenswrapper[4835]: I0201 08:04:10.566736 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:10 crc kubenswrapper[4835]: I0201 08:04:10.568168 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:04:10 crc kubenswrapper[4835]: E0201 08:04:10.568561 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:11 crc kubenswrapper[4835]: I0201 08:04:11.567275 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:11 crc kubenswrapper[4835]: I0201 08:04:11.567783 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:04:11 crc kubenswrapper[4835]: E0201 08:04:11.568232 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.996808 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" exitCode=1 Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.996920 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996"} Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.997129 4835 scope.go:117] "RemoveContainer" containerID="700112fad0f4ad91d48c44e77419088f8f3cdd322d0db821e4eac71b3672a4b2" Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.997815 4835 scope.go:117] "RemoveContainer" 
containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.997865 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.997935 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.997954 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:04:13 crc kubenswrapper[4835]: E0201 08:04:13.519267 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:13 crc kubenswrapper[4835]: I0201 08:04:13.566384 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:04:13 crc kubenswrapper[4835]: I0201 08:04:13.566528 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:04:13 crc kubenswrapper[4835]: I0201 08:04:13.566638 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:04:13 crc kubenswrapper[4835]: I0201 08:04:13.566646 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:04:13 crc kubenswrapper[4835]: I0201 08:04:13.566680 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:04:13 crc kubenswrapper[4835]: E0201 08:04:13.567036 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.014651 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" exitCode=1 Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 
08:04:14.015612 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" exitCode=1 Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.014721 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"} Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.015767 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"} Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.015852 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"} Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.015923 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.016552 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.016643 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.016754 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" Feb 01 08:04:14 crc kubenswrapper[4835]: E0201 08:04:14.017116 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.072529 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.037954 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" exitCode=1 Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.038014 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"} Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.038066 4835 scope.go:117] 
"RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.038976 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.039093 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.039241 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.039255 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:15 crc kubenswrapper[4835]: E0201 08:04:15.039821 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:16 crc kubenswrapper[4835]: I0201 08:04:16.065212 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:16 crc kubenswrapper[4835]: I0201 08:04:16.065277 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:16 crc kubenswrapper[4835]: I0201 08:04:16.065349 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" Feb 01 08:04:16 crc kubenswrapper[4835]: I0201 08:04:16.065355 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:16 crc kubenswrapper[4835]: E0201 08:04:16.065774 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:20 crc kubenswrapper[4835]: I0201 08:04:20.566522 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:04:20 crc kubenswrapper[4835]: I0201 08:04:20.566868 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:04:20 crc kubenswrapper[4835]: I0201 08:04:20.566971 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:04:21 crc kubenswrapper[4835]: I0201 08:04:21.116290 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"} Feb 01 08:04:21 crc kubenswrapper[4835]: I0201 08:04:21.116842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"} Feb 01 08:04:21 crc kubenswrapper[4835]: I0201 08:04:21.116915 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"} Feb 01 08:04:21 crc kubenswrapper[4835]: I0201 08:04:21.155482 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=351.155461568 podStartE2EDuration="5m51.155461568s" podCreationTimestamp="2026-02-01 07:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 08:04:21.151207657 +0000 UTC m=+2534.271644111" watchObservedRunningTime="2026-02-01 08:04:21.155461568 +0000 UTC m=+2534.275898022" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136686 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" exitCode=1 Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136725 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" exitCode=1 Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136735 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" exitCode=1 Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136744 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"} Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136797 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"} Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136816 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136951 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"} Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.137573 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.137663 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.137795 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:04:22 crc kubenswrapper[4835]: E0201 08:04:22.138272 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.187470 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.229002 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:04:23 crc kubenswrapper[4835]: I0201 08:04:23.151756 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:04:23 crc kubenswrapper[4835]: I0201 08:04:23.151844 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:04:23 crc kubenswrapper[4835]: I0201 08:04:23.151986 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:04:23 crc kubenswrapper[4835]: E0201 08:04:23.152295 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for 
\"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:04:24 crc kubenswrapper[4835]: I0201 08:04:24.566844 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:24 crc kubenswrapper[4835]: I0201 08:04:24.566886 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:04:24 crc kubenswrapper[4835]: E0201 08:04:24.567219 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:26 crc kubenswrapper[4835]: I0201 08:04:26.567049 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:26 crc kubenswrapper[4835]: I0201 08:04:26.567111 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:04:26 crc kubenswrapper[4835]: E0201 08:04:26.567653 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:27 crc kubenswrapper[4835]: I0201 08:04:27.574590 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:04:27 crc kubenswrapper[4835]: I0201 08:04:27.574680 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:04:27 crc kubenswrapper[4835]: I0201 08:04:27.574778 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:04:27 crc kubenswrapper[4835]: I0201 08:04:27.574789 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:04:27 crc kubenswrapper[4835]: I0201 08:04:27.574833 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:04:27 crc kubenswrapper[4835]: E0201 08:04:27.576519 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:04:28 crc kubenswrapper[4835]: I0201 08:04:28.567848 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:28 crc kubenswrapper[4835]: I0201 08:04:28.568252 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:28 crc kubenswrapper[4835]: I0201 08:04:28.568452 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" Feb 01 08:04:28 crc kubenswrapper[4835]: I0201 08:04:28.568467 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:28 crc kubenswrapper[4835]: E0201 08:04:28.784116 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:29 crc kubenswrapper[4835]: I0201 08:04:29.225098 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"} Feb 01 08:04:29 crc kubenswrapper[4835]: I0201 08:04:29.226110 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:29 crc kubenswrapper[4835]: I0201 08:04:29.226254 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:29 crc kubenswrapper[4835]: I0201 08:04:29.226455 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:29 crc kubenswrapper[4835]: E0201 08:04:29.226840 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:35 crc kubenswrapper[4835]: I0201 08:04:35.572619 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:04:35 crc kubenswrapper[4835]: I0201 08:04:35.573141 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:04:35 crc kubenswrapper[4835]: I0201 08:04:35.573261 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:04:35 crc kubenswrapper[4835]: E0201 08:04:35.573660 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:04:36 crc kubenswrapper[4835]: I0201 08:04:36.566854 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:36 crc kubenswrapper[4835]: I0201 08:04:36.567278 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:04:36 crc kubenswrapper[4835]: E0201 08:04:36.567718 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:39 crc kubenswrapper[4835]: I0201 08:04:39.568322 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:04:39 crc kubenswrapper[4835]: I0201 08:04:39.568846 4835 scope.go:117] "RemoveContainer" 
containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:04:39 crc kubenswrapper[4835]: I0201 08:04:39.568999 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:04:39 crc kubenswrapper[4835]: I0201 08:04:39.569015 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:04:39 crc kubenswrapper[4835]: I0201 08:04:39.569082 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:04:39 crc kubenswrapper[4835]: E0201 08:04:39.569697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:04:41 crc kubenswrapper[4835]: I0201 08:04:41.567200 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:41 crc kubenswrapper[4835]: I0201 08:04:41.567709 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:04:41 crc kubenswrapper[4835]: E0201 08:04:41.764371 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:42 crc kubenswrapper[4835]: I0201 08:04:42.343838 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"} Feb 01 08:04:42 crc kubenswrapper[4835]: I0201 08:04:42.344263 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:04:42 crc kubenswrapper[4835]: I0201 08:04:42.344786 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:42 crc kubenswrapper[4835]: E0201 08:04:42.345335 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.356069 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" exitCode=1 Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.356115 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"} Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.356146 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.356700 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.356723 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:04:43 crc kubenswrapper[4835]: E0201 08:04:43.357113 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.567325 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.567525 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.567707 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:43 crc kubenswrapper[4835]: E0201 08:04:43.568177 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:44 crc kubenswrapper[4835]: I0201 
08:04:44.385897 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:44 crc kubenswrapper[4835]: I0201 08:04:44.386321 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:04:44 crc kubenswrapper[4835]: E0201 08:04:44.386670 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:45 crc kubenswrapper[4835]: I0201 08:04:45.535953 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:04:45 crc kubenswrapper[4835]: I0201 08:04:45.536853 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:45 crc kubenswrapper[4835]: I0201 08:04:45.536876 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:04:45 crc kubenswrapper[4835]: E0201 08:04:45.537376 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:47 crc kubenswrapper[4835]: I0201 08:04:47.578155 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:04:47 crc kubenswrapper[4835]: I0201 08:04:47.578972 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:04:47 crc kubenswrapper[4835]: I0201 08:04:47.579234 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:04:47 crc kubenswrapper[4835]: E0201 08:04:47.579915 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" 
pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:04:49 crc kubenswrapper[4835]: I0201 08:04:49.567617 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:49 crc kubenswrapper[4835]: I0201 08:04:49.569576 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:04:49 crc kubenswrapper[4835]: E0201 08:04:49.788145 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.446528 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"} Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.446850 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.447180 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:50 crc kubenswrapper[4835]: E0201 08:04:50.447502 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.568599 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.568749 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.568901 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.568916 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.568979 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:04:50 crc kubenswrapper[4835]: E0201 08:04:50.569633 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with 
CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:04:51 crc kubenswrapper[4835]: I0201 08:04:51.462894 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" exitCode=1 Feb 01 08:04:51 crc kubenswrapper[4835]: I0201 08:04:51.462963 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"} Feb 01 08:04:51 crc kubenswrapper[4835]: I0201 08:04:51.463510 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:04:51 crc kubenswrapper[4835]: I0201 08:04:51.463738 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:51 crc kubenswrapper[4835]: I0201 08:04:51.463770 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:04:51 crc kubenswrapper[4835]: E0201 08:04:51.464147 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:52 crc kubenswrapper[4835]: I0201 08:04:52.019750 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:04:52 crc kubenswrapper[4835]: I0201 08:04:52.477707 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:52 crc kubenswrapper[4835]: I0201 08:04:52.478072 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:04:52 crc kubenswrapper[4835]: E0201 08:04:52.478364 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:53 crc kubenswrapper[4835]: I0201 08:04:53.487624 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:53 crc kubenswrapper[4835]: I0201 08:04:53.487668 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:04:53 crc kubenswrapper[4835]: E0201 08:04:53.488059 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:54 crc kubenswrapper[4835]: I0201 08:04:54.567956 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:54 crc kubenswrapper[4835]: I0201 08:04:54.568484 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:54 crc kubenswrapper[4835]: I0201 08:04:54.568665 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:54 crc kubenswrapper[4835]: E0201 08:04:54.569134 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:57 crc kubenswrapper[4835]: I0201 08:04:57.579092 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:57 crc kubenswrapper[4835]: I0201 08:04:57.579152 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:04:57 crc kubenswrapper[4835]: E0201 08:04:57.579646 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:59 crc kubenswrapper[4835]: I0201 08:04:59.429173 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:04:59 crc kubenswrapper[4835]: E0201 08:04:59.429324 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 08:04:59 crc kubenswrapper[4835]: E0201 08:04:59.431686 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:07:01.431644452 +0000 UTC m=+2694.552080926 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 08:04:59 crc kubenswrapper[4835]: I0201 08:04:59.567918 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:04:59 crc kubenswrapper[4835]: I0201 08:04:59.568018 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:04:59 crc kubenswrapper[4835]: I0201 08:04:59.568177 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:04:59 crc kubenswrapper[4835]: E0201 08:04:59.568670 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.567526 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.567657 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.567842 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.567859 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.567930 4835 
scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.588506 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" exitCode=1 Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.588575 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d"} Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.588625 4835 scope.go:117] "RemoveContainer" containerID="6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.590511 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.590962 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.591302 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.591638 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:01 crc kubenswrapper[4835]: E0201 08:05:01.593110 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:02 crc kubenswrapper[4835]: E0201 08:05:02.308963 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606159 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" exitCode=1 Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606212 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" 
containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" exitCode=1 Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606301 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"} Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606338 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"} Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606357 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"} Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606379 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"} Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606405 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.607392 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.607548 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.607723 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:05:02 crc kubenswrapper[4835]: E0201 08:05:02.608219 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.677941 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.636870 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565" exitCode=1 Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.636958 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565"} Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.637031 4835 scope.go:117] "RemoveContainer" containerID="7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.637840 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.637916 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.638003 4835 scope.go:117] "RemoveContainer" containerID="119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.638029 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:03 crc kubenswrapper[4835]: E0201 08:05:03.638639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.650807 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" exitCode=1 Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.650919 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" exitCode=1 Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.650866 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"} Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651056 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"} Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651696 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651757 4835 scope.go:117] "RemoveContainer" 
containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651878 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651889 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651930 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:03 crc kubenswrapper[4835]: E0201 08:05:03.652216 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.707049 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.751885 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:05:05 crc kubenswrapper[4835]: I0201 08:05:05.566985 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:05:05 crc kubenswrapper[4835]: I0201 08:05:05.567270 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:05:05 crc kubenswrapper[4835]: E0201 08:05:05.567548 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:05:11 crc kubenswrapper[4835]: E0201 08:05:11.298800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" 
pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:05:11 crc kubenswrapper[4835]: I0201 08:05:11.730572 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:05:12 crc kubenswrapper[4835]: I0201 08:05:12.566922 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:05:12 crc kubenswrapper[4835]: I0201 08:05:12.566970 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:05:12 crc kubenswrapper[4835]: E0201 08:05:12.567366 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:05:14 crc kubenswrapper[4835]: I0201 08:05:14.567759 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:14 crc kubenswrapper[4835]: I0201 08:05:14.568283 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:14 crc kubenswrapper[4835]: I0201 08:05:14.568322 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:05:14 crc kubenswrapper[4835]: I0201 08:05:14.568450 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:14 crc kubenswrapper[4835]: E0201 08:05:14.568885 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.567553 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.567627 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.567700 4835 
scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.567708 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.567740 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:15 crc kubenswrapper[4835]: E0201 08:05:15.760076 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.782246 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"} Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.783386 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.783574 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.783837 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.783977 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:15 crc kubenswrapper[4835]: E0201 08:05:15.784528 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:16 crc kubenswrapper[4835]: I0201 08:05:16.567476 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:16 crc kubenswrapper[4835]: I0201 08:05:16.567620 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:16 crc kubenswrapper[4835]: I0201 08:05:16.567796 4835 scope.go:117] "RemoveContainer" containerID="119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565" Feb 01 08:05:16 crc kubenswrapper[4835]: I0201 08:05:16.567812 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:16 crc kubenswrapper[4835]: E0201 08:05:16.568391 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:18 crc kubenswrapper[4835]: I0201 08:05:18.566510 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:05:18 crc kubenswrapper[4835]: I0201 08:05:18.566913 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:05:18 crc kubenswrapper[4835]: E0201 08:05:18.567226 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:05:25 crc kubenswrapper[4835]: I0201 08:05:25.568563 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:25 crc kubenswrapper[4835]: I0201 08:05:25.570622 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:25 crc kubenswrapper[4835]: I0201 08:05:25.570685 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:05:25 crc 
kubenswrapper[4835]: I0201 08:05:25.570810 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:25 crc kubenswrapper[4835]: E0201 08:05:25.571398 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:26 crc kubenswrapper[4835]: I0201 08:05:26.570584 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:05:26 crc kubenswrapper[4835]: I0201 08:05:26.570619 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:05:26 crc kubenswrapper[4835]: E0201 08:05:26.570905 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:05:29 crc kubenswrapper[4835]: I0201 08:05:29.566876 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:05:29 crc kubenswrapper[4835]: I0201 08:05:29.567345 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:05:29 crc kubenswrapper[4835]: E0201 08:05:29.567807 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568210 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568287 4835 
scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568355 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568432 4835 scope.go:117] "RemoveContainer" containerID="119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568445 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568508 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568710 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568808 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:30 crc kubenswrapper[4835]: E0201 08:05:30.569480 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:30 crc kubenswrapper[4835]: E0201 08:05:30.780316 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.935328 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd"} Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.935734 4835 scope.go:117] 
"RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.935791 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.935873 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:30 crc kubenswrapper[4835]: E0201 08:05:30.936091 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.977864 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" exitCode=1 Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.977938 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"} Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.978805 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.979831 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.979957 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.980005 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.980163 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.980253 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:34 crc kubenswrapper[4835]: E0201 08:05:34.980910 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to 
\"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:36 crc kubenswrapper[4835]: I0201 08:05:36.567887 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:36 crc kubenswrapper[4835]: I0201 08:05:36.568324 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:36 crc kubenswrapper[4835]: I0201 08:05:36.568353 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:05:36 crc kubenswrapper[4835]: I0201 08:05:36.568536 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:36 crc kubenswrapper[4835]: E0201 08:05:36.568917 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:40 crc kubenswrapper[4835]: I0201 08:05:40.568247 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:05:40 crc kubenswrapper[4835]: I0201 08:05:40.568818 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:05:40 crc kubenswrapper[4835]: E0201 08:05:40.569382 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" 
podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:05:42 crc kubenswrapper[4835]: I0201 08:05:42.567589 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:05:42 crc kubenswrapper[4835]: I0201 08:05:42.567643 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:05:42 crc kubenswrapper[4835]: E0201 08:05:42.568129 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:05:44 crc kubenswrapper[4835]: I0201 08:05:44.567748 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:44 crc kubenswrapper[4835]: I0201 08:05:44.568200 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:44 crc kubenswrapper[4835]: I0201 08:05:44.568397 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:44 crc kubenswrapper[4835]: E0201 08:05:44.568957 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:49 crc kubenswrapper[4835]: I0201 08:05:49.567664 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:49 crc kubenswrapper[4835]: I0201 08:05:49.568634 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:49 crc kubenswrapper[4835]: I0201 08:05:49.568685 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:05:49 crc kubenswrapper[4835]: I0201 08:05:49.568811 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:49 crc kubenswrapper[4835]: I0201 08:05:49.568876 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:49 crc kubenswrapper[4835]: E0201 08:05:49.569566 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:50 crc kubenswrapper[4835]: I0201 08:05:50.566958 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:50 crc kubenswrapper[4835]: I0201 08:05:50.567051 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:50 crc kubenswrapper[4835]: I0201 08:05:50.567080 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:05:50 crc kubenswrapper[4835]: I0201 08:05:50.567148 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:50 crc kubenswrapper[4835]: E0201 08:05:50.773937 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.160458 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e"} Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.161697 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.161792 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.161924 4835 scope.go:117] "RemoveContainer" 
containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:51 crc kubenswrapper[4835]: E0201 08:05:51.162364 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.567339 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.567398 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:05:51 crc kubenswrapper[4835]: E0201 08:05:51.567990 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:05:56 crc kubenswrapper[4835]: I0201 08:05:56.567626 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:05:56 crc kubenswrapper[4835]: I0201 08:05:56.568017 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:05:56 crc kubenswrapper[4835]: E0201 08:05:56.568484 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:05:57 crc kubenswrapper[4835]: I0201 08:05:57.573047 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:57 crc kubenswrapper[4835]: I0201 08:05:57.573147 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:57 crc kubenswrapper[4835]: I0201 08:05:57.573275 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:57 
crc kubenswrapper[4835]: E0201 08:05:57.573647 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.567216 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568000 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:06:03 crc kubenswrapper[4835]: E0201 08:06:03.568399 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568490 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568594 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568626 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568713 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568762 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:06:03 crc kubenswrapper[4835]: E0201 08:06:03.569155 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:06:04 crc kubenswrapper[4835]: I0201 08:06:04.568318 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:06:04 crc kubenswrapper[4835]: I0201 08:06:04.568543 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:06:04 crc kubenswrapper[4835]: I0201 08:06:04.568775 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:06:04 crc kubenswrapper[4835]: E0201 08:06:04.569464 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.339522 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" exitCode=1 Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.339637 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd"} Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.340385 4835 scope.go:117] "RemoveContainer" containerID="119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.341259 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.341456 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.341639 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.341687 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:09 crc kubenswrapper[4835]: E0201 08:06:09.342496 4835 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.567545 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.567733 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:09 crc kubenswrapper[4835]: E0201 08:06:09.568083 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.418195 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160" exitCode=1 Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.418275 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160"} Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.418884 4835 scope.go:117] "RemoveContainer" containerID="811dcfbbfbce2457a26cf2cfd3d7f241f223d0bd48897b5e6e54984050426b01" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.420034 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.420190 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.420257 4835 scope.go:117] "RemoveContainer" containerID="b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.420466 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.420490 4835 
scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:15 crc kubenswrapper[4835]: E0201 08:06:15.421251 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.567318 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.567370 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:06:15 crc kubenswrapper[4835]: E0201 08:06:15.567900 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:06:16 crc kubenswrapper[4835]: I0201 08:06:16.567634 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:06:16 crc kubenswrapper[4835]: I0201 08:06:16.568004 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:06:16 crc kubenswrapper[4835]: I0201 08:06:16.568122 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:06:16 crc kubenswrapper[4835]: E0201 08:06:16.568446 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to 
\"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:06:18 crc kubenswrapper[4835]: I0201 08:06:18.568362 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:06:18 crc kubenswrapper[4835]: I0201 08:06:18.568535 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:06:18 crc kubenswrapper[4835]: I0201 08:06:18.568585 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:06:18 crc kubenswrapper[4835]: I0201 08:06:18.568715 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:06:18 crc kubenswrapper[4835]: I0201 08:06:18.568805 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:06:18 crc kubenswrapper[4835]: E0201 08:06:18.569468 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:06:22 crc kubenswrapper[4835]: I0201 08:06:22.566887 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:06:22 crc kubenswrapper[4835]: I0201 08:06:22.567237 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:22 crc kubenswrapper[4835]: E0201 08:06:22.778577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:23 crc kubenswrapper[4835]: I0201 08:06:23.504192 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" 
event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4"} Feb 01 08:06:23 crc kubenswrapper[4835]: I0201 08:06:23.504452 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:06:23 crc kubenswrapper[4835]: I0201 08:06:23.504800 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:23 crc kubenswrapper[4835]: E0201 08:06:23.505135 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:24 crc kubenswrapper[4835]: I0201 08:06:24.515214 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:24 crc kubenswrapper[4835]: E0201 08:06:24.515972 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:25 crc kubenswrapper[4835]: I0201 08:06:25.192285 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 08:06:25 crc kubenswrapper[4835]: I0201 08:06:25.192371 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 08:06:28 crc kubenswrapper[4835]: I0201 08:06:28.023298 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:29 crc kubenswrapper[4835]: I0201 08:06:29.852458 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:06:29 crc kubenswrapper[4835]: I0201 08:06:29.852497 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:06:29 crc kubenswrapper[4835]: E0201 08:06:29.862735 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.020856 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.568402 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.568612 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.568676 4835 scope.go:117] "RemoveContainer" containerID="b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.568818 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.568834 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.569452 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.569619 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.569866 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:06:30 crc kubenswrapper[4835]: E0201 08:06:30.570596 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:06:30 crc kubenswrapper[4835]: E0201 08:06:30.779447 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.890880 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3"} Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.891788 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.891874 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.894666 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.894703 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:30 crc kubenswrapper[4835]: E0201 08:06:30.896217 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:31 crc kubenswrapper[4835]: I0201 08:06:31.021329 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:33 crc kubenswrapper[4835]: I0201 08:06:33.568787 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:06:33 crc kubenswrapper[4835]: I0201 08:06:33.569807 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:06:33 crc kubenswrapper[4835]: I0201 08:06:33.569875 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:06:33 crc kubenswrapper[4835]: I0201 08:06:33.570041 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:06:33 crc 
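The back-off values recorded in the "Error syncing pod" entries above (back-off 10s, back-off 40s, back-off 5m0s) trace kubelet's crash-loop restart delay, which starts at 10s, doubles after each failed restart, and is capped at 5m. A minimal Go sketch of that progression, as an illustration only (not kubelet source):

// crashloop_backoff.go — reproduces the delay sequence implied by the log
// above, assuming kubelet's documented 10s initial delay, x2 growth, 5m cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // initial crash-loop delay ("back-off 10s")
	const maxDelay = 5 * time.Minute // cap, logged as "back-off 5m0s"

	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2 // double after each failed restart
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s — consistent with the
	// container-updater (10s), object-updater (40s) and 5m0s entries above.
}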
Feb 01 08:06:33 crc kubenswrapper[4835]: E0201 08:06:33.571038 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.022647 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.022770 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.023706 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.023749 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.023796 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4" gracePeriod=30
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.028291 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:06:34 crc kubenswrapper[4835]: E0201 08:06:34.323773 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.927339 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4" exitCode=0
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.927400 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4"}
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.927739 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"}
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.927761 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.927917 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.928281 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:06:34 crc kubenswrapper[4835]: E0201 08:06:34.928521 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:06:35 crc kubenswrapper[4835]: I0201 08:06:35.940914 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:06:35 crc kubenswrapper[4835]: E0201 08:06:35.941129 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:06:40 crc kubenswrapper[4835]: I0201 08:06:40.023292 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:06:40 crc kubenswrapper[4835]: I0201 08:06:40.023354 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.021669 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.567644 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.567724 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.567829 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:06:43 crc kubenswrapper[4835]: E0201 08:06:43.568122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.568272 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.568356 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.568527 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd"
Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.568539 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:06:43 crc kubenswrapper[4835]: E0201 08:06:43.568987 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:06:44 crc kubenswrapper[4835]: I0201 08:06:44.567247 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"
Feb 01 08:06:44 crc kubenswrapper[4835]: I0201 08:06:44.568497 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:06:44 crc kubenswrapper[4835]: E0201 08:06:44.568745 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:06:45 crc kubenswrapper[4835]: I0201 08:06:45.021314 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.021270 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.022188 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.023096 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.023256 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.023404 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" gracePeriod=30
Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.025373 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:06:46 crc kubenswrapper[4835]: E0201 08:06:46.238593 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.062724 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" exitCode=0
Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.062842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"}
Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.062991 4835 scope.go:117] "RemoveContainer" containerID="3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4"
Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.063981 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.064026 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:06:47 crc kubenswrapper[4835]: E0201 08:06:47.064399 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.576280 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.576735 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.576768 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.576855 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.576901 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:06:47 crc kubenswrapper[4835]: E0201 08:06:47.577318 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
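The proxy-httpd sequence above shows a complete liveness-restart cycle: three consecutive "Probe failed" liveness entries (08:06:28, 08:06:31, 08:06:34), then "failed liveness probe, will be restarted", "Killing container with a grace period" (gracePeriod=30), a ContainerDied with exitCode=0, and a fresh ContainerStarted. The 3s spacing and the kill after the third failure suggest periodSeconds=3 and failureThreshold=3, but both are inferences from the timestamps, not values stated in this log. A minimal Go sketch of that counting rule under those assumptions:

// probe_threshold.go — hypothetical illustration of the liveness rule the
// timestamps above imply; failureThreshold=3 is an assumption, not kubelet source.
package main

import "fmt"

func main() {
	const failureThreshold = 3             // assumed pod-spec value
	results := []bool{false, false, false} // proxy-httpd returned 503 three times

	consecutiveFailures := 0
	for i, healthy := range results {
		if healthy {
			consecutiveFailures = 0 // any success resets the counter
			continue
		}
		consecutiveFailures++
		fmt.Printf("probe %d failed (%d/%d)\n", i+1, consecutiveFailures, failureThreshold)
		if consecutiveFailures >= failureThreshold {
			fmt.Println("threshold reached: kill container, restart subject to back-off")
		}
	}
}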
Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.146956 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" exitCode=1
Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.147001 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"}
Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.147397 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a"
Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148388 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148540 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148585 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148678 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148710 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148776 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:06:54 crc kubenswrapper[4835]: E0201 08:06:54.149321 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:06:55 crc kubenswrapper[4835]: I0201 08:06:55.191924 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 08:06:55 crc kubenswrapper[4835]: I0201 08:06:55.191994 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.577649 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578146 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578229 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578315 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd"
Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578333 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578552 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578776 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:06:57 crc kubenswrapper[4835]: E0201 08:06:57.579278 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:06:57 crc kubenswrapper[4835]: E0201 08:06:57.785037 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:06:58 crc kubenswrapper[4835]: I0201 08:06:58.218159 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"}
Feb 01 08:06:58 crc kubenswrapper[4835]: I0201 08:06:58.219249 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:06:58 crc kubenswrapper[4835]: I0201 08:06:58.219369 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:06:58 crc kubenswrapper[4835]: I0201 08:06:58.219587 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:06:58 crc kubenswrapper[4835]: E0201 08:06:58.220163 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:06:59 crc kubenswrapper[4835]: I0201 08:06:59.568024 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:06:59 crc kubenswrapper[4835]: I0201 08:06:59.568086 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"
Feb 01 08:06:59 crc kubenswrapper[4835]: I0201 08:06:59.568154 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:06:59 crc kubenswrapper[4835]: I0201 08:06:59.568179 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:06:59 crc kubenswrapper[4835]: E0201 08:06:59.568683 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:06:59 crc kubenswrapper[4835]: E0201 08:06:59.568728 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:07:01 crc kubenswrapper[4835]: I0201 08:07:01.488666 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 08:07:01 crc kubenswrapper[4835]: E0201 08:07:01.488934 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 08:07:01 crc kubenswrapper[4835]: E0201 08:07:01.489358 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:09:03.489330546 +0000 UTC m=+2816.609767020 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.568112 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.569612 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.569649 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.569702 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.569710 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.569749 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:07:09 crc kubenswrapper[4835]: E0201 08:07:09.570124 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
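The nestedpendingoperations.go entry at 08:07:01 above makes the volume-retry arithmetic explicit: the MountVolume.SetUp failure (configmap "swift-ring-config-data" not found) plus the logged durationBeforeRetry of 2m2s gives the "No retries permitted until ... 08:09:03" timestamp. A small Go sketch of just that arithmetic; the exponential back-off that produced the 2m2s value is kubelet-internal and not shown in this log:

// mount_retry.go — recomputes the retry deadline from the two values the
// log entry states directly (failure time and durationBeforeRetry).
package main

import (
	"fmt"
	"time"
)

func main() {
	failedAt, err := time.Parse(time.RFC3339, "2026-02-01T08:07:01Z")
	if err != nil {
		panic(err)
	}
	durationBeforeRetry := 2*time.Minute + 2*time.Second // "durationBeforeRetry 2m2s" from the log

	nextRetry := failedAt.Add(durationBeforeRetry)
	fmt.Println("no retries permitted until:", nextRetry.Format("15:04:05")) // prints 08:09:03
}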
\"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.566450 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.566494 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.566790 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.566934 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.567189 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.567321 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:07:10 crc kubenswrapper[4835]: E0201 08:07:10.566923 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.567503 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.567730 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:07:10 crc kubenswrapper[4835]: E0201 08:07:10.567750 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:07:10 crc kubenswrapper[4835]: E0201 08:07:10.568355 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:07:11 crc kubenswrapper[4835]: I0201 08:07:11.567593 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:07:11 crc kubenswrapper[4835]: I0201 08:07:11.568067 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:07:11 crc kubenswrapper[4835]: E0201 08:07:11.568626 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:07:14 crc kubenswrapper[4835]: E0201 08:07:14.732737 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:07:15 crc kubenswrapper[4835]: I0201 08:07:15.402906 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.566980 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.567109 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.567153 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.567398 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.567417 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.567509 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:07:20 crc kubenswrapper[4835]: E0201 08:07:20.568180 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:07:21 crc kubenswrapper[4835]: I0201 08:07:21.567034 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:07:21 crc kubenswrapper[4835]: I0201 08:07:21.567358 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:21 crc kubenswrapper[4835]: E0201 08:07:21.737225 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:22 crc kubenswrapper[4835]: I0201 08:07:22.474404 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af"} Feb 01 08:07:22 crc kubenswrapper[4835]: I0201 08:07:22.475215 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:22 crc kubenswrapper[4835]: E0201 08:07:22.475564 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:22 crc kubenswrapper[4835]: I0201 08:07:22.475811 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:07:22 crc kubenswrapper[4835]: I0201 08:07:22.567632 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:07:22 crc kubenswrapper[4835]: I0201 08:07:22.567689 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:07:22 crc kubenswrapper[4835]: E0201 08:07:22.568071 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:07:23 crc kubenswrapper[4835]: I0201 08:07:23.483393 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:23 crc kubenswrapper[4835]: E0201 08:07:23.484096 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:23 crc kubenswrapper[4835]: I0201 08:07:23.567641 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:07:23 crc kubenswrapper[4835]: I0201 08:07:23.567775 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:07:23 crc kubenswrapper[4835]: I0201 08:07:23.567957 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:07:23 crc kubenswrapper[4835]: E0201 08:07:23.568451 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.191277 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.191335 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.191376 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.191961 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.192043 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549" gracePeriod=600 Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.504533 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549" exitCode=0 Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.504598 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549"} Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.504698 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.568293 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.568458 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:07:25 crc kubenswrapper[4835]: 
I0201 08:07:25.568847 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:07:25 crc kubenswrapper[4835]: E0201 08:07:25.569859 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:07:26 crc kubenswrapper[4835]: I0201 08:07:26.515794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"} Feb 01 08:07:27 crc kubenswrapper[4835]: I0201 08:07:27.539572 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:27 crc kubenswrapper[4835]: I0201 08:07:27.539929 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:30 crc kubenswrapper[4835]: I0201 08:07:30.539108 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.566923 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.566992 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.567013 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.567059 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.567066 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.567099 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:07:31 crc kubenswrapper[4835]: E0201 08:07:31.567380 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:07:32 crc kubenswrapper[4835]: I0201 08:07:32.537655 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.538777 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.538884 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.539661 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.539687 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.539749 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af" gracePeriod=30 Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.540446 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.566593 4835 scope.go:117] "RemoveContainer" 
containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.566646 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:07:33 crc kubenswrapper[4835]: E0201 08:07:33.567033 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:07:33 crc kubenswrapper[4835]: E0201 08:07:33.902323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.586719 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af" exitCode=0 Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.586939 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af"} Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.587292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"} Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.587336 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.588058 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.588497 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:34 crc kubenswrapper[4835]: E0201 08:07:34.588870 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:35 crc kubenswrapper[4835]: I0201 08:07:35.601645 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:35 crc kubenswrapper[4835]: E0201 08:07:35.601965 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.568363 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.568940 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.569135 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.569392 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.569687 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:07:36 crc kubenswrapper[4835]: E0201 08:07:36.569706 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.570166 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:07:36 crc kubenswrapper[4835]: E0201 08:07:36.570810 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:07:39 crc kubenswrapper[4835]: I0201 08:07:39.538325 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.537238 4835 prober.go:107] "Probe 
failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.537939 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.567691 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.567857 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.567922 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.568062 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.568084 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.568183 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:07:42 crc kubenswrapper[4835]: E0201 08:07:42.569015 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.537925 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.538893 4835 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.540213 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.540251 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.540307 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" gracePeriod=30 Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.541318 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:45 crc kubenswrapper[4835]: E0201 08:07:45.667809 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.700014 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" exitCode=0 Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.700088 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"} Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.700142 4835 scope.go:117] "RemoveContainer" containerID="e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.701182 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.701237 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:45 crc kubenswrapper[4835]: E0201 08:07:45.701688 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:47 crc kubenswrapper[4835]: I0201 08:07:47.572402 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:07:47 crc kubenswrapper[4835]: I0201 08:07:47.572775 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:07:47 crc kubenswrapper[4835]: E0201 08:07:47.573052 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:07:47 crc kubenswrapper[4835]: I0201 08:07:47.573081 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:07:47 crc kubenswrapper[4835]: I0201 08:07:47.573199 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:07:47 crc kubenswrapper[4835]: I0201 08:07:47.573392 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:07:47 crc kubenswrapper[4835]: E0201 08:07:47.573904 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:07:51 crc kubenswrapper[4835]: I0201 08:07:51.568309 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:07:51 crc kubenswrapper[4835]: I0201 08:07:51.569840 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:07:51 crc kubenswrapper[4835]: I0201 08:07:51.570231 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:07:51 crc kubenswrapper[4835]: E0201 08:07:51.571125 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567350 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567754 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567783 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567872 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567881 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567929 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:07:53 crc kubenswrapper[4835]: E0201 08:07:53.568299 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:07:56 crc kubenswrapper[4835]: I0201 08:07:56.566824 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:07:56 crc kubenswrapper[4835]: I0201 08:07:56.567169 4835 scope.go:117] "RemoveContainer" 
containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:56 crc kubenswrapper[4835]: E0201 08:07:56.567594 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:58 crc kubenswrapper[4835]: I0201 08:07:58.568070 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:07:58 crc kubenswrapper[4835]: I0201 08:07:58.568240 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:07:58 crc kubenswrapper[4835]: I0201 08:07:58.568489 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:07:58 crc kubenswrapper[4835]: E0201 08:07:58.568978 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:07:59 crc kubenswrapper[4835]: I0201 08:07:59.566492 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:07:59 crc kubenswrapper[4835]: I0201 08:07:59.566799 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:07:59 crc kubenswrapper[4835]: E0201 08:07:59.567153 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:08:04 crc kubenswrapper[4835]: I0201 08:08:04.567098 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:08:04 crc kubenswrapper[4835]: I0201 08:08:04.567775 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:08:04 
crc kubenswrapper[4835]: I0201 08:08:04.567912 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:08:04 crc kubenswrapper[4835]: E0201 08:08:04.568302 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.577856 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.578400 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.578616 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.578742 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.578762 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.578913 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:08:07 crc kubenswrapper[4835]: E0201 08:08:07.579983 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" 
pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.568106 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.568277 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.568492 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:08:09 crc kubenswrapper[4835]: E0201 08:08:09.568992 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.949808 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3" exitCode=1 Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.949880 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3"} Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.949948 4835 scope.go:117] "RemoveContainer" containerID="b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.951013 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.951131 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.951176 4835 scope.go:117] "RemoveContainer" containerID="92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.951313 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:08:09 crc kubenswrapper[4835]: E0201 08:08:09.951964 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" 
for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:08:10 crc kubenswrapper[4835]: I0201 08:08:10.567241 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:08:10 crc kubenswrapper[4835]: I0201 08:08:10.567268 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:08:10 crc kubenswrapper[4835]: E0201 08:08:10.567488 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:08:11 crc kubenswrapper[4835]: I0201 08:08:11.567276 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:08:11 crc kubenswrapper[4835]: I0201 08:08:11.567318 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:08:11 crc kubenswrapper[4835]: E0201 08:08:11.567803 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:08:18 crc kubenswrapper[4835]: I0201 08:08:18.567744 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:08:18 crc kubenswrapper[4835]: I0201 08:08:18.568526 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:08:18 crc kubenswrapper[4835]: I0201 08:08:18.568704 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:08:18 crc kubenswrapper[4835]: E0201 08:08:18.569298 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:08:21 crc kubenswrapper[4835]: I0201 08:08:21.567554 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:08:21 crc kubenswrapper[4835]: I0201 08:08:21.569449 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:08:21 crc kubenswrapper[4835]: E0201 08:08:21.570136 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.567864 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.567905 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:08:22 crc kubenswrapper[4835]: E0201 08:08:22.568259 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568401 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568576 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568622 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568719 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568733 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568796 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:08:22 crc 
kubenswrapper[4835]: E0201 08:08:22.569346 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:08:23 crc kubenswrapper[4835]: I0201 08:08:23.567820 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:08:23 crc kubenswrapper[4835]: I0201 08:08:23.568295 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:08:23 crc kubenswrapper[4835]: I0201 08:08:23.568340 4835 scope.go:117] "RemoveContainer" containerID="92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3" Feb 01 08:08:23 crc kubenswrapper[4835]: I0201 08:08:23.568489 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:08:23 crc kubenswrapper[4835]: E0201 08:08:23.569022 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:08:31 crc kubenswrapper[4835]: I0201 08:08:31.568306 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:08:31 crc kubenswrapper[4835]: I0201 08:08:31.569477 4835 
scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:08:31 crc kubenswrapper[4835]: I0201 08:08:31.569665 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:08:31 crc kubenswrapper[4835]: E0201 08:08:31.570152 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.567171 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.567950 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.567996 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.568113 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.568133 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.568219 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:08:33 crc kubenswrapper[4835]: E0201 08:08:33.568840 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:08:34 crc kubenswrapper[4835]: I0201 08:08:34.567401 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:08:34 crc kubenswrapper[4835]: I0201 08:08:34.567475 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:08:34 crc kubenswrapper[4835]: E0201 08:08:34.567713 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:08:35 crc kubenswrapper[4835]: I0201 08:08:35.567632 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:08:35 crc kubenswrapper[4835]: I0201 08:08:35.567689 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:08:35 crc kubenswrapper[4835]: E0201 08:08:35.568286 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:08:37 crc kubenswrapper[4835]: I0201 08:08:37.595136 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:08:37 crc kubenswrapper[4835]: I0201 08:08:37.595795 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:08:37 crc kubenswrapper[4835]: I0201 08:08:37.595845 4835 scope.go:117] "RemoveContainer" containerID="92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3"
Feb 01 08:08:37 crc kubenswrapper[4835]: I0201 08:08:37.595968 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:08:37 crc kubenswrapper[4835]: E0201 08:08:37.839976 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:08:38 crc kubenswrapper[4835]: I0201 08:08:38.226090 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"}
Feb 01 08:08:38 crc kubenswrapper[4835]: I0201 08:08:38.227064 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:08:38 crc kubenswrapper[4835]: I0201 08:08:38.227181 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:08:38 crc kubenswrapper[4835]: I0201 08:08:38.227390 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:08:38 crc kubenswrapper[4835]: E0201 08:08:38.227975 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567099 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567455 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:08:45 crc kubenswrapper[4835]: E0201 08:08:45.567648 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567666 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567727 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567814 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567880 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567960 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567990 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.568056 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.568065 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:08:45 crc kubenswrapper[4835]: E0201 08:08:45.568079 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.568111 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:08:45 crc kubenswrapper[4835]: E0201 08:08:45.568506 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:08:49 crc kubenswrapper[4835]: I0201 08:08:49.572944 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:08:49 crc kubenswrapper[4835]: I0201 08:08:49.573612 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:08:49 crc kubenswrapper[4835]: E0201 08:08:49.573800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:08:50 crc kubenswrapper[4835]: I0201 08:08:50.568504 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:08:50 crc kubenswrapper[4835]: I0201 08:08:50.568584 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:08:50 crc kubenswrapper[4835]: I0201 08:08:50.568692 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:08:50 crc kubenswrapper[4835]: E0201 08:08:50.569025 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.397028 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f" exitCode=1
Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.397088 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"}
Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.397382 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd"
Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.398091 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.398167 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.398263 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.398283 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:08:53 crc kubenswrapper[4835]: E0201 08:08:53.398630 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.355322 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vwdqc/must-gather-c7xxg"]
Feb 01 08:08:54 crc kubenswrapper[4835]: E0201 08:08:54.355985 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="extract-content"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.356004 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="extract-content"
Feb 01 08:08:54 crc kubenswrapper[4835]: E0201 08:08:54.356018 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="extract-utilities"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.356025 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="extract-utilities"
Feb 01 08:08:54 crc kubenswrapper[4835]: E0201 08:08:54.356044 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="registry-server"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.356051 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="registry-server"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.356200 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="registry-server"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.357171 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.363684 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vwdqc/must-gather-c7xxg"]
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.367765 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vwdqc"/"default-dockercfg-k5f5r"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.371975 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vwdqc"/"openshift-service-ca.crt"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.372393 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vwdqc"/"kube-root-ca.crt"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.441840 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-must-gather-output\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.441940 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2blml\" (UniqueName: \"kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.543228 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-must-gather-output\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.543635 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2blml\" (UniqueName: \"kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.543740 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-must-gather-output\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.564350 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2blml\" (UniqueName: \"kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.678240 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.921211 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vwdqc/must-gather-c7xxg"]
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.936992 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 01 08:08:55 crc kubenswrapper[4835]: I0201 08:08:55.419985 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" event={"ID":"dfdcbe67-d5e0-4882-b2d9-e039513a25f0","Type":"ContainerStarted","Data":"8aeae16b7dc6696e2eb129853e351d55daf6c70724a0e8dd2121c1356b4e3980"}
Feb 01 08:08:56 crc kubenswrapper[4835]: I0201 08:08:56.566876 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:08:56 crc kubenswrapper[4835]: I0201 08:08:56.567265 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:08:56 crc kubenswrapper[4835]: I0201 08:08:56.567383 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:08:56 crc kubenswrapper[4835]: E0201 08:08:56.567768 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577467 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577552 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577579 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577635 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577660 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577702 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:08:57 crc kubenswrapper[4835]: E0201 08:08:57.578057 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.471026 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c" exitCode=1
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.471119 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"}
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.471586 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.473205 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.473383 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.473638 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.473899 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:08:59 crc kubenswrapper[4835]: E0201 08:08:59.475052 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.481194 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" event={"ID":"dfdcbe67-d5e0-4882-b2d9-e039513a25f0","Type":"ContainerStarted","Data":"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7"}
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.481279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" event={"ID":"dfdcbe67-d5e0-4882-b2d9-e039513a25f0","Type":"ContainerStarted","Data":"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"}
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.549694 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" podStartSLOduration=1.8787696569999999 podStartE2EDuration="5.549675495s" podCreationTimestamp="2026-02-01 08:08:54 +0000 UTC" firstStartedPulling="2026-02-01 08:08:54.93680214 +0000 UTC m=+2808.057238574" lastFinishedPulling="2026-02-01 08:08:58.607707928 +0000 UTC m=+2811.728144412" observedRunningTime="2026-02-01 08:08:59.538959646 +0000 UTC m=+2812.659396090" watchObservedRunningTime="2026-02-01 08:08:59.549675495 +0000 UTC m=+2812.670111949"
Feb 01 08:09:00 crc kubenswrapper[4835]: I0201 08:09:00.567621 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:09:00 crc kubenswrapper[4835]: I0201 08:09:00.567666 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:09:00 crc kubenswrapper[4835]: E0201 08:09:00.567991 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:09:03 crc kubenswrapper[4835]: I0201 08:09:03.576138 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 08:09:03 crc kubenswrapper[4835]: E0201 08:09:03.576327 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 08:09:03 crc kubenswrapper[4835]: E0201 08:09:03.577074 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:11:05.577060748 +0000 UTC m=+2938.697497182 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
Feb 01 08:09:04 crc kubenswrapper[4835]: I0201 08:09:04.566629 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:04 crc kubenswrapper[4835]: I0201 08:09:04.567001 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:09:04 crc kubenswrapper[4835]: E0201 08:09:04.567333 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:05 crc kubenswrapper[4835]: I0201 08:09:05.568633 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:09:05 crc kubenswrapper[4835]: I0201 08:09:05.568736 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:09:05 crc kubenswrapper[4835]: I0201 08:09:05.568855 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:05 crc kubenswrapper[4835]: I0201 08:09:05.568869 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:09:05 crc kubenswrapper[4835]: E0201 08:09:05.569315 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.566641 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.566918 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.566938 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.566986 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.566994 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.567024 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:09:08 crc kubenswrapper[4835]: E0201 08:09:08.567332 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:09:14 crc kubenswrapper[4835]: I0201 08:09:14.567238 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:09:14 crc kubenswrapper[4835]: I0201 08:09:14.567816 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:09:14 crc kubenswrapper[4835]: I0201 08:09:14.567893 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"
Feb 01 08:09:14 crc kubenswrapper[4835]: I0201 08:09:14.567901 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:09:15 crc kubenswrapper[4835]: E0201 08:09:15.059137 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.567638 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.567697 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.567727 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.567756 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:09:15 crc kubenswrapper[4835]: E0201 08:09:15.567966 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:09:15 crc kubenswrapper[4835]: E0201 08:09:15.568090 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.650979 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" exitCode=1
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.651050 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"}
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.651091 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"}
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.651107 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"}
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.651061 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" exitCode=1
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.651123 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.652226 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.652322 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.652451 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.652461 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:09:15 crc kubenswrapper[4835]: E0201 08:09:15.652837 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.716337 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.671263 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" exitCode=1
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.671285 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"}
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.671367 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.672476 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.672585 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.672721 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.672737 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:09:16 crc kubenswrapper[4835]: E0201 08:09:16.673351 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:18 crc kubenswrapper[4835]: E0201 08:09:18.404353 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc"
Feb 01 08:09:18 crc kubenswrapper[4835]: I0201 08:09:18.567638 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:09:18 crc kubenswrapper[4835]: I0201 08:09:18.567715 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:09:18 crc kubenswrapper[4835]: I0201 08:09:18.567808 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:18 crc kubenswrapper[4835]: I0201 08:09:18.567817 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:09:18 crc kubenswrapper[4835]: E0201 08:09:18.568173 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:18 crc kubenswrapper[4835]: I0201 08:09:18.704510 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567475 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567860 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567882 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567929 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567936 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567981 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:09:21 crc kubenswrapper[4835]: E0201 08:09:21.568416 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:09:25 crc kubenswrapper[4835]: I0201 08:09:25.191719 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 08:09:25 crc kubenswrapper[4835]: I0201 08:09:25.192356 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.778486 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e" exitCode=1
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.778553 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"}
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.779562 4835 scope.go:117] "RemoveContainer" containerID="92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.780259 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.780340 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.780371 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.780492 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.780506 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.293532 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.566981 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.567248 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.567337 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.567356 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.567465 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.567595 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.568988 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.569290 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.569678 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.569837 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.738948 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.795791 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28"}
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.796573 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.796649 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.796769 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.797185 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.807815 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" exitCode=1
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.807861 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" exitCode=1
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.807918 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"}
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808002 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"}
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808039 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"}
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808082 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808864 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808939 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808966 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.809011 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.809295 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.884286 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.821648 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" exitCode=1
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.821838 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"}
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822037 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822729 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822825 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822865 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822939 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822949 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:09:29 crc kubenswrapper[4835]: E0201 08:09:29.823482 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.567675 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.568340 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.568368 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.568488 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.568498 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.568539 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:09:32 crc kubenswrapper[4835]: E0201 08:09:32.568888 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:09:39 crc kubenswrapper[4835]: I0201 08:09:39.573928 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:39 crc kubenswrapper[4835]: I0201 08:09:39.575449 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:09:39 crc kubenswrapper[4835]: E0201 08:09:39.575962 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:43 crc
kubenswrapper[4835]: I0201 08:09:43.494347 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/util/0.log" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.566652 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.566681 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.566805 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.566857 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:09:43 crc kubenswrapper[4835]: E0201 08:09:43.566879 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.566960 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:09:43 crc kubenswrapper[4835]: E0201 08:09:43.567164 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567560 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567611 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567633 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567677 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567685 4835 scope.go:117] "RemoveContainer" 
containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567715 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:09:43 crc kubenswrapper[4835]: E0201 08:09:43.567979 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.568458 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.568516 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.568537 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.568582 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.568589 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:09:43 crc kubenswrapper[4835]: E0201 08:09:43.568827 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 
1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.635465 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/pull/0.log" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.670866 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/util/0.log" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.672470 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/pull/0.log" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.877656 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/pull/0.log" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.884705 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/util/0.log" Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.928907 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/extract/0.log" Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.071024 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/util/0.log" Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.226152 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/pull/0.log" Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.281892 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/util/0.log" Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.294491 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/pull/0.log" Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.441436 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/pull/0.log" Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.477959 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/extract/0.log" 
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.484659 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/util/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.754382 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/util/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.888173 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/util/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.897622 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/pull/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.912126 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/pull/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.046826 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/pull/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.077797 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/util/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.078795 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/extract/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.283011 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-index-fmwqp_4fa5ae77-daab-43fa-b798-b9895f717e0a/registry-server/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.463765 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/util/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.682098 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/pull/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.701341 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/pull/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.726389 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/util/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.897954 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/pull/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.936983 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/extract/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.987703 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/util/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.116244 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/util/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.329668 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/pull/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.414755 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/util/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.416941 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/pull/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.602767 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/extract/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.620031 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/util/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.647460 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/pull/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.805640 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/util/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.017687 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/util/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.019190 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/pull/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.055218 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/pull/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.244676 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/pull/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.251740 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/util/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.272683 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/extract/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.465497 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6f4d667fdd-rfzbv_aeafdd64-5ab8-429a-9411-bdfe3e0780af/manager/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.572769 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-index-x9r54_c754e3d7-d607-4427-b349-b5c22df261ec/registry-server/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.666035 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7ddb6bb5f-7x7n4_84eb5c79-bae7-43b3-9b04-c949dc8c5ec4/manager/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.805041 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-index-6hv5l_09002d70-8878-4f31-bc75-ddf7378a8564/registry-server/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.882558 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5fc7bf5575-vbqwd_73820432-e4ca-45a7-ae9c-77a538ce1d20/manager/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.940912 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-854bb59648-nqzs5_2562b9ca-8a8f-4a90-8e8f-fd3e4b235603/manager/0.log"
Feb 01 08:09:48 crc kubenswrapper[4835]: I0201 08:09:48.062300 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-index-hgssn_bc494048-8b2c-4d2e-925e-8b1b779dab89/registry-server/0.log"
Feb 01 08:09:48 crc kubenswrapper[4835]: I0201 08:09:48.081964 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-779fc9694b-fhcz9_b76bd603-252c-4c26-a1c7-0009be5661be/operator/0.log"
Feb 01 08:09:48 crc kubenswrapper[4835]: I0201 08:09:48.170083 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-index-nztp8_be408dba-dcbf-40e4-9b83-cd67424ad82d/registry-server/0.log"
Feb 01 08:09:48 crc kubenswrapper[4835]: I0201 08:09:48.294287 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7b5bf4689c-j4d4r_26de1ab5-eb0d-4fe4-83ad-25f2262bd958/manager/0.log"
Feb 01 08:09:48 crc kubenswrapper[4835]: I0201 08:09:48.339190 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-index-tj2nn_ebf9c948-3fde-47f0-aa35-856193c1a275/registry-server/0.log"
Feb 01 08:09:54 crc kubenswrapper[4835]: I0201 08:09:54.567267 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:54 crc kubenswrapper[4835]: I0201 08:09:54.567758 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:09:54 crc kubenswrapper[4835]: E0201 08:09:54.762113 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:55 crc kubenswrapper[4835]: I0201 08:09:55.192167 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 08:09:55 crc kubenswrapper[4835]: I0201 08:09:55.192229 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 08:09:55 crc kubenswrapper[4835]: I0201 08:09:55.695570 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"}
Feb 01 08:09:55 crc kubenswrapper[4835]: I0201 08:09:55.695849 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 08:09:55 crc kubenswrapper[4835]: I0201 08:09:55.696129 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:55 crc kubenswrapper[4835]: E0201 08:09:55.696301 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.567971 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.568333 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.568363 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"
Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.568451 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.568460 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:09:56 crc kubenswrapper[4835]: E0201 08:09:56.568891 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.706896 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" exitCode=1
Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.706937 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"}
Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.706971 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.707701 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.707736 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:09:56 crc kubenswrapper[4835]: E0201 08:09:56.708106 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.535025 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.575208 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.575282 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.575386 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:09:57 crc kubenswrapper[4835]: E0201 08:09:57.575729 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.575855 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.575877 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.577900 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.577970 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.577998 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.578100 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.578115 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.578158 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:09:57 crc kubenswrapper[4835]: E0201 08:09:57.578520 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.720048 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.720073 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:09:57 crc kubenswrapper[4835]: E0201 08:09:57.720373 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:57 crc kubenswrapper[4835]: E0201 08:09:57.756305 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.733080 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" exitCode=1
Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.733218 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"}
Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.733324 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.733929 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.733958 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.734071 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.734103 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:09:58 crc kubenswrapper[4835]: E0201 08:09:58.734237 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:09:58 crc kubenswrapper[4835]: E0201 08:09:58.734849 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:10:00 crc kubenswrapper[4835]: I0201 08:10:00.019013 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 08:10:00 crc kubenswrapper[4835]: I0201 08:10:00.019745 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:10:00 crc kubenswrapper[4835]: I0201 08:10:00.019764 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:10:00 crc kubenswrapper[4835]: E0201 08:10:00.020027 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:10:01 crc kubenswrapper[4835]: I0201 08:10:01.019702 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 08:10:01 crc kubenswrapper[4835]: I0201 08:10:01.021765 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:10:01 crc kubenswrapper[4835]: I0201 08:10:01.021927 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:10:01 crc kubenswrapper[4835]: E0201 08:10:01.022358 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.340206 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"]
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.342209 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.364225 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"]
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.420335 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.420459 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.420631 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kz84\" (UniqueName: \"kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.522283 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.522341 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.522386 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kz84\" (UniqueName: \"kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.522861 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.523096 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.540340 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kz84\" (UniqueName: \"kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.662429 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6xsk"
Feb 01 08:10:03 crc kubenswrapper[4835]: I0201 08:10:03.150638 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"]
Feb 01 08:10:03 crc kubenswrapper[4835]: I0201 08:10:03.783136 4835 generic.go:334] "Generic (PLEG): container finished" podID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerID="b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead" exitCode=0
Feb 01 08:10:03 crc kubenswrapper[4835]: I0201 08:10:03.783217 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerDied","Data":"b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead"}
Feb 01 08:10:03 crc kubenswrapper[4835]: I0201 08:10:03.783525 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerStarted","Data":"83bf7050a3928eff21dd7ca58ac3da1b5dd3eefee3c5bca8bc925f193a4e6dc0"}
Feb 01 08:10:04 crc kubenswrapper[4835]: I0201 08:10:04.362120 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ngjw6_a67dd2fd-8463-4887-94b7-405df03c5c0a/control-plane-machine-set-operator/0.log"
Feb 01 08:10:04 crc kubenswrapper[4835]: I0201 08:10:04.529820 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-whqd4_8924e4db-3c47-4e66-90d1-e74e49f3a65d/kube-rbac-proxy/0.log"
Feb 01 08:10:04 crc kubenswrapper[4835]: I0201 08:10:04.550885 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-whqd4_8924e4db-3c47-4e66-90d1-e74e49f3a65d/machine-api-operator/0.log"
Feb 01 08:10:04 crc kubenswrapper[4835]: I0201 08:10:04.791448 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerStarted","Data":"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452"}
Feb 01 08:10:05 crc kubenswrapper[4835]: I0201 08:10:05.801482 4835 generic.go:334] "Generic (PLEG): container finished" podID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerID="19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452" exitCode=0
Feb 01 08:10:05 crc kubenswrapper[4835]: I0201 08:10:05.801518 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerDied","Data":"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452"}
Feb 01 08:10:06 crc kubenswrapper[4835]: I0201 08:10:06.811190 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerStarted","Data":"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4"}
Feb 01 08:10:06 crc kubenswrapper[4835]: I0201 08:10:06.828809 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6xsk" podStartSLOduration=2.40124141 podStartE2EDuration="4.828790214s" podCreationTimestamp="2026-02-01 08:10:02 +0000 UTC" firstStartedPulling="2026-02-01 08:10:03.78481161 +0000 UTC m=+2876.905248044" lastFinishedPulling="2026-02-01 08:10:06.212360424 +0000 UTC m=+2879.332796848" observedRunningTime="2026-02-01 08:10:06.825849117 +0000 UTC m=+2879.946285551" watchObservedRunningTime="2026-02-01 08:10:06.828790214 +0000 UTC m=+2879.949226658"
Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.573430 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.574058 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.574149 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"
Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.574280 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.574345 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:10:07 crc kubenswrapper[4835]: E0201 08:10:07.752835 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.824482 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"}
Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.825461 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.825518 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.825597 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.825610 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:10:07 crc kubenswrapper[4835]: E0201 08:10:07.825896 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.566829 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.567122 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.567146 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.567188 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.567195 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.567225 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.841387 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"}
Feb 01 08:10:09 crc kubenswrapper[4835]: E0201 08:10:09.251849 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.853769 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" exitCode=1
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.853976 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" exitCode=1
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.853986 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" exitCode=1
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.853992 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" exitCode=1
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.853990 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"}
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.854037 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"}
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.854049 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"}
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.854060 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"}
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.854076 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861321 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861568 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861598 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861644 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861650 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861687 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:10:09 crc kubenswrapper[4835]: E0201 08:10:09.861990 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.920775 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.998602 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.056261 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.867917 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" exitCode=1
Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.867993 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e"}
Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.868211 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d"
Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.868846 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.868897 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.868918 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e"
Feb 01 08:10:10 crc kubenswrapper[4835]: I0201
08:10:10.868986 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:10:10 crc kubenswrapper[4835]: E0201 08:10:10.869249 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.875881 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.875958 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.875979 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.876022 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.876029 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.876059 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:10 crc kubenswrapper[4835]: E0201 08:10:10.876498 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for 
\"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.567260 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.567626 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.567724 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.567750 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:10:12 crc kubenswrapper[4835]: E0201 08:10:12.567960 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:10:12 crc kubenswrapper[4835]: E0201 08:10:12.568001 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.663351 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.663652 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.711974 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.970956 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:13 crc kubenswrapper[4835]: I0201 08:10:13.018938 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"] Feb 01 08:10:14 crc kubenswrapper[4835]: I0201 08:10:14.917594 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6xsk" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="registry-server" 
containerID="cri-o://e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4" gracePeriod=2 Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.306044 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.403499 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content\") pod \"655042b7-c713-4116-b191-f8e9c03ac3b0\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.403671 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kz84\" (UniqueName: \"kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84\") pod \"655042b7-c713-4116-b191-f8e9c03ac3b0\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.403716 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities\") pod \"655042b7-c713-4116-b191-f8e9c03ac3b0\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.404625 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities" (OuterVolumeSpecName: "utilities") pod "655042b7-c713-4116-b191-f8e9c03ac3b0" (UID: "655042b7-c713-4116-b191-f8e9c03ac3b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.409166 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84" (OuterVolumeSpecName: "kube-api-access-6kz84") pod "655042b7-c713-4116-b191-f8e9c03ac3b0" (UID: "655042b7-c713-4116-b191-f8e9c03ac3b0"). InnerVolumeSpecName "kube-api-access-6kz84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.457041 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "655042b7-c713-4116-b191-f8e9c03ac3b0" (UID: "655042b7-c713-4116-b191-f8e9c03ac3b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.505639 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.505685 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kz84\" (UniqueName: \"kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84\") on node \"crc\" DevicePath \"\"" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.505701 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.927154 4835 generic.go:334] "Generic (PLEG): container finished" podID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerID="e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4" exitCode=0 Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.927198 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerDied","Data":"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4"} Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.927206 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.927227 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerDied","Data":"83bf7050a3928eff21dd7ca58ac3da1b5dd3eefee3c5bca8bc925f193a4e6dc0"} Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.927248 4835 scope.go:117] "RemoveContainer" containerID="e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.950694 4835 scope.go:117] "RemoveContainer" containerID="19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.956476 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"] Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.983157 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"] Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.983339 4835 scope.go:117] "RemoveContainer" containerID="b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.020104 4835 scope.go:117] "RemoveContainer" containerID="e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4" Feb 01 08:10:16 crc kubenswrapper[4835]: E0201 08:10:16.020555 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4\": container with ID starting with e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4 not found: ID does not exist" containerID="e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.020595 
4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4"} err="failed to get container status \"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4\": rpc error: code = NotFound desc = could not find container \"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4\": container with ID starting with e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4 not found: ID does not exist" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.020623 4835 scope.go:117] "RemoveContainer" containerID="19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452" Feb 01 08:10:16 crc kubenswrapper[4835]: E0201 08:10:16.021046 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452\": container with ID starting with 19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452 not found: ID does not exist" containerID="19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.021165 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452"} err="failed to get container status \"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452\": rpc error: code = NotFound desc = could not find container \"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452\": container with ID starting with 19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452 not found: ID does not exist" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.021238 4835 scope.go:117] "RemoveContainer" containerID="b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead" Feb 01 08:10:16 crc kubenswrapper[4835]: E0201 08:10:16.021807 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead\": container with ID starting with b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead not found: ID does not exist" containerID="b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.021886 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead"} err="failed to get container status \"b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead\": rpc error: code = NotFound desc = could not find container \"b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead\": container with ID starting with b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead not found: ID does not exist" Feb 01 08:10:17 crc kubenswrapper[4835]: I0201 08:10:17.578211 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" path="/var/lib/kubelet/pods/655042b7-c713-4116-b191-f8e9c03ac3b0/volumes" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.567593 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.568431 4835 scope.go:117] "RemoveContainer" 
containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.568540 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.568550 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:10:20 crc kubenswrapper[4835]: E0201 08:10:20.781268 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.968308 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"4c18a6c0ad7fc9f3254096d7bfa007b9115d0360f41fd74b092f41a03c6d622a"} Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.969591 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.969698 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.969862 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:10:20 crc kubenswrapper[4835]: E0201 08:10:20.970323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567106 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567193 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567223 4835 scope.go:117] "RemoveContainer" 
containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567288 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567296 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567340 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:21 crc kubenswrapper[4835]: E0201 08:10:21.567714 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:23 crc kubenswrapper[4835]: I0201 08:10:23.566912 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:10:23 crc kubenswrapper[4835]: I0201 08:10:23.567273 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:10:23 crc kubenswrapper[4835]: E0201 08:10:23.567548 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:10:24 crc kubenswrapper[4835]: I0201 08:10:24.567202 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:24 crc kubenswrapper[4835]: I0201 08:10:24.567654 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:10:24 crc 
kubenswrapper[4835]: E0201 08:10:24.568865 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.191723 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.191815 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.191882 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.192940 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.193041 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" gracePeriod=600 Feb 01 08:10:25 crc kubenswrapper[4835]: E0201 08:10:25.322447 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.567064 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.567537 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.567579 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.567683 4835 scope.go:117] 
"RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:10:25 crc kubenswrapper[4835]: E0201 08:10:25.568154 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:10:26 crc kubenswrapper[4835]: I0201 08:10:26.006196 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" exitCode=0 Feb 01 08:10:26 crc kubenswrapper[4835]: I0201 08:10:26.006244 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"} Feb 01 08:10:26 crc kubenswrapper[4835]: I0201 08:10:26.006275 4835 scope.go:117] "RemoveContainer" containerID="946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549" Feb 01 08:10:26 crc kubenswrapper[4835]: I0201 08:10:26.007328 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:10:26 crc kubenswrapper[4835]: E0201 08:10:26.007842 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.567992 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.568431 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.568464 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.568532 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.568543 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:33 crc 
kubenswrapper[4835]: I0201 08:10:33.568589 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:33 crc kubenswrapper[4835]: E0201 08:10:33.569015 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.947072 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6qvjg_86105024-7ff9-4d38-9333-c7c7b241a5c5/kube-rbac-proxy/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.009201 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6qvjg_86105024-7ff9-4d38-9333-c7c7b241a5c5/controller/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.125320 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-frr-files/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.285461 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-frr-files/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.300872 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-reloader/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.322297 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-reloader/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.325876 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-metrics/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.595400 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-metrics/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.596258 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-reloader/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.596867 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-frr-files/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.637358 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-metrics/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.788062 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-metrics/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.816385 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-frr-files/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.865324 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/controller/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.879108 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-reloader/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.053331 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/frr-metrics/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.077584 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/kube-rbac-proxy/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.123094 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/kube-rbac-proxy-frr/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.288928 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/reloader/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.304481 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/frr/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.372139 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7ldwd_e60f3db5-acc8-404c-a98c-6e6bfb05d6e9/frr-k8s-webhook-server/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.502592 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-56dbb5cfb5-ls84h_91863ede-5184-40d2-8fba-1f65d6fdc785/manager/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.552361 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58b8447d8-56lmr_c2ca8e92-ef3f-442a-830f-0e3c37d76087/webhook-server/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.567038 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.567062 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 
08:10:35 crc kubenswrapper[4835]: E0201 08:10:35.567271 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.567594 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.567659 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.567747 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:10:35 crc kubenswrapper[4835]: E0201 08:10:35.568012 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.716625 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8s85p_0975cec6-f6ff-4188-9435-864a46ad1740/kube-rbac-proxy/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.775798 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8s85p_0975cec6-f6ff-4188-9435-864a46ad1740/speaker/0.log" Feb 01 08:10:38 crc kubenswrapper[4835]: I0201 08:10:38.566106 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:10:38 crc kubenswrapper[4835]: I0201 08:10:38.567665 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:10:38 crc kubenswrapper[4835]: E0201 08:10:38.568099 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:10:39 crc 
kubenswrapper[4835]: I0201 08:10:39.567688 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:10:39 crc kubenswrapper[4835]: I0201 08:10:39.568172 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:10:39 crc kubenswrapper[4835]: I0201 08:10:39.568230 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:10:39 crc kubenswrapper[4835]: I0201 08:10:39.568387 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:10:39 crc kubenswrapper[4835]: E0201 08:10:39.569089 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:10:41 crc kubenswrapper[4835]: I0201 08:10:41.567082 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:10:41 crc kubenswrapper[4835]: E0201 08:10:41.567721 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.567490 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568127 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568311 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:48 crc kubenswrapper[4835]: E0201 08:10:48.568392 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568466 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568545 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568642 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568658 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568722 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:48 crc kubenswrapper[4835]: E0201 08:10:48.849825 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.194754 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"} Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.195360 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.195434 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.195512 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.195520 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.195550 4835 scope.go:117] "RemoveContainer" 
containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:49 crc kubenswrapper[4835]: E0201 08:10:49.195825 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.567242 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.567310 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.567424 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:10:50 crc kubenswrapper[4835]: E0201 08:10:50.567706 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.650668 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-api-6966d58856-gg77m_6a69ee37-d1ea-4c2f-880a-1edb52d4352c/barbican-api-log/0.log" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.660029 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-api-6966d58856-gg77m_6a69ee37-d1ea-4c2f-880a-1edb52d4352c/barbican-api/0.log" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.794717 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-db-sync-ll8z7_b13e8606-6ec5-4e1b-a3fd-30f8eac5809a/barbican-db-sync/0.log" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.844862 4835 
log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-keystone-listener-77cb446946-46jb6_8653dceb-2d4e-419e-aa35-37bdca49dc2c/barbican-keystone-listener/0.log" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.991461 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-worker-794b798997-b6znz_c8bf5a1c-707a-4858-a716-7bc593ef0fc3/barbican-worker/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.008959 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-keystone-listener-77cb446946-46jb6_8653dceb-2d4e-419e-aa35-37bdca49dc2c/barbican-keystone-listener-log/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.053581 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-worker-794b798997-b6znz_c8bf5a1c-707a-4858-a716-7bc593ef0fc3/barbican-worker-log/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.291401 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_keystone-cron-29498881-kfzg5_f0c36c8d-897d-4b88-a236-44fe0d511c4e/keystone-cron/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.423568 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_keystone-95fb65664-fmplj_99f218fc-86ce-4952-a7cd-4c80a7cfe774/keystone-api/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.566766 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.567026 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.567047 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.567103 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:10:51 crc kubenswrapper[4835]: E0201 08:10:51.567364 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.597235 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_d1414aa9-85a0-4ed8-b897-0afc315eacf6/mysql-bootstrap/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.737657 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_d1414aa9-85a0-4ed8-b897-0afc315eacf6/galera/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.742108 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_d1414aa9-85a0-4ed8-b897-0afc315eacf6/mysql-bootstrap/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.991417 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_b44d32e5-044c-42e2-a6c8-eb93e48219f2/mysql-bootstrap/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.170159 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_b44d32e5-044c-42e2-a6c8-eb93e48219f2/mysql-bootstrap/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.177151 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_memcached-0_37529abc-a5d7-416b-8ea4-c6f0542ab3a8/memcached/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.195327 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_b44d32e5-044c-42e2-a6c8-eb93e48219f2/galera/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.344811 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_f271d73a-6ed8-4c97-b087-c6b3287c11e4/mysql-bootstrap/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.511732 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_f271d73a-6ed8-4c97-b087-c6b3287c11e4/mysql-bootstrap/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.519254 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_f271d73a-6ed8-4c97-b087-c6b3287c11e4/galera/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.547827 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e/setup-container/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.566666 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:10:52 crc kubenswrapper[4835]: E0201 08:10:52.566918 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.567263 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.567280 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:10:52 crc kubenswrapper[4835]: E0201 08:10:52.567465 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.702732 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e/setup-container/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.739744 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e/rabbitmq/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.776637 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-6c7f677bc9-lq29p_0449d2d9-ddcc-4eaa-84b1-9095448105f5/proxy-httpd/11.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.883259 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-6c7f677bc9-lq29p_0449d2d9-ddcc-4eaa-84b1-9095448105f5/proxy-httpd/11.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.901767 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-6c7f677bc9-lq29p_0449d2d9-ddcc-4eaa-84b1-9095448105f5/proxy-server/9.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.945734 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-6c7f677bc9-lq29p_0449d2d9-ddcc-4eaa-84b1-9095448105f5/proxy-server/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.084350 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-7d8cf99555-6vq9r_8ccb8908-ffc6-4032-8907-da7491bf9304/proxy-httpd/15.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.086280 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-7d8cf99555-6vq9r_8ccb8908-ffc6-4032-8907-da7491bf9304/proxy-httpd/15.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.102511 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-7d8cf99555-6vq9r_8ccb8908-ffc6-4032-8907-da7491bf9304/proxy-server/11.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.106995 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-7d8cf99555-6vq9r_8ccb8908-ffc6-4032-8907-da7491bf9304/proxy-server/11.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.267377 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/account-auditor/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.455693 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/account-reaper/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.457583 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/account-replicator/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.466274 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/account-replicator/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.507976 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/account-server/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.615795 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-auditor/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.632948 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-replicator/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.655742 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-replicator/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.703960 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-server/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.777026 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-sharder/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.794500 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-sharder/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.833501 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-updater/7.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.896715 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-updater/6.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.954288 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-auditor/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.981275 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-expirer/9.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.008692 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-expirer/9.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.121610 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-server/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.126783 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-replicator/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.185756 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-updater/6.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.237895 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-updater/6.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.272990 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/rsync/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.281575 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/swift-recon-cron/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.347019 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/account-auditor/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.406690 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/account-reaper/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.437763 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/account-replicator/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.453208 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/account-replicator/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.480672 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/account-server/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.526512 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-auditor/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.607684 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-replicator/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.610636 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-server/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.616060 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-replicator/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.647453 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-updater/4.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.705960 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-updater/4.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.786083 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-auditor/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.789036 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-expirer/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.806174 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-expirer/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.842995 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-replicator/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.864337 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-server/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.940134 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-updater/3.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.969063 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/rsync/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.987914 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-updater/2.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.123776 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/swift-recon-cron/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.210769 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/account-auditor/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.252807 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/account-replicator/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.283369 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/account-reaper/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.285011 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/account-replicator/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.309890 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/account-server/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.399831 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-auditor/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.409942 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-replicator/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.439724 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-replicator/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.498945 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-server/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.499188 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-updater/4.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.553438 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-updater/3.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.598157 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-auditor/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.620478 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-expirer/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.679108 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-replicator/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.683156 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-expirer/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.704476 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-server/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.782664 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-updater/5.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.814499 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-updater/4.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.855278 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/rsync/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.865224 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/swift-recon-cron/0.log" Feb 01 08:10:59 crc kubenswrapper[4835]: I0201 08:10:59.569753 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:59 crc kubenswrapper[4835]: I0201 08:10:59.570302 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:10:59 crc kubenswrapper[4835]: E0201 08:10:59.570562 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:00 crc kubenswrapper[4835]: I0201 08:11:00.568146 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:11:00 crc kubenswrapper[4835]: I0201 08:11:00.568288 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:11:00 crc kubenswrapper[4835]: I0201 08:11:00.568494 4835 
scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:11:00 crc kubenswrapper[4835]: I0201 08:11:00.568512 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:11:00 crc kubenswrapper[4835]: I0201 08:11:00.568579 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:11:00 crc kubenswrapper[4835]: E0201 08:11:00.569207 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:11:04 crc kubenswrapper[4835]: I0201 08:11:04.567515 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:11:04 crc kubenswrapper[4835]: I0201 08:11:04.568176 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:11:04 crc kubenswrapper[4835]: I0201 08:11:04.568255 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:11:04 crc kubenswrapper[4835]: E0201 08:11:04.568267 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:11:04 crc kubenswrapper[4835]: I0201 08:11:04.568344 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:11:04 crc kubenswrapper[4835]: E0201 08:11:04.568639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", 
failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:11:05 crc kubenswrapper[4835]: I0201 08:11:05.567863 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:11:05 crc kubenswrapper[4835]: I0201 08:11:05.567932 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:11:05 crc kubenswrapper[4835]: I0201 08:11:05.567951 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:11:05 crc kubenswrapper[4835]: I0201 08:11:05.568008 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:11:05 crc kubenswrapper[4835]: E0201 08:11:05.568277 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:11:05 crc kubenswrapper[4835]: I0201 08:11:05.597763 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:11:05 crc kubenswrapper[4835]: E0201 08:11:05.597991 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 08:11:05 crc kubenswrapper[4835]: E0201 08:11:05.598113 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:13:07.598082888 +0000 UTC m=+3060.718519362 (durationBeforeRetry 2m2s). 
Feb 01 08:11:06 crc kubenswrapper[4835]: I0201 08:11:06.566984 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:11:06 crc kubenswrapper[4835]: I0201 08:11:06.567024 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:11:06 crc kubenswrapper[4835]: E0201 08:11:06.567297 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.143247 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/util/0.log"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.316039 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/util/0.log"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.340309 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/pull/0.log"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.373749 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/pull/0.log"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.559854 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/pull/0.log"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.561084 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/util/0.log"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.561521 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/extract/0.log"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.713759 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-utilities/0.log"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.902976 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-utilities/0.log"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.917660 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-content/0.log"
Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.917828 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-content/0.log"
Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.095033 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-content/0.log"
Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.169125 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-utilities/0.log"
Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.339587 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-utilities/0.log"
Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.564694 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-content/0.log"
Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.564791 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-utilities/0.log"
Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.598163 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-content/0.log"
Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.637278 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/registry-server/0.log"
Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.783196 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-utilities/0.log"
Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.803526 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-content/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.097980 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/registry-server/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.117478 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-www9n_c2481990-b703-4792-b5b0-549daf22e66a/marketplace-operator/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.134644 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-utilities/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.305839 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-utilities/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.326858 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-content/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.332567 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-content/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.506169 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-utilities/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.511206 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-content/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.566736 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.567108 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.567205 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.567217 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.567260 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:11:11 crc kubenswrapper[4835]: E0201 08:11:11.567649 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.608837 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/registry-server/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.724932 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-utilities/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.890447 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-content/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.912267 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-utilities/0.log"
Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.932063 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-content/0.log"
Feb 01 08:11:12 crc kubenswrapper[4835]: I0201 08:11:12.074622 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-utilities/0.log"
Feb 01 08:11:12 crc kubenswrapper[4835]: I0201 08:11:12.128376 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-content/0.log"
Feb 01 08:11:12 crc kubenswrapper[4835]: I0201 08:11:12.308591 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/registry-server/0.log"
Feb 01 08:11:14 crc kubenswrapper[4835]: I0201 08:11:14.567009 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:11:14 crc kubenswrapper[4835]: I0201 08:11:14.567495 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:11:14 crc kubenswrapper[4835]: E0201 08:11:14.567975 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:11:16 crc kubenswrapper[4835]: I0201 08:11:16.568464 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:11:16 crc kubenswrapper[4835]: I0201 08:11:16.568604 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:11:16 crc kubenswrapper[4835]: I0201 08:11:16.568644 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:11:16 crc kubenswrapper[4835]: I0201 08:11:16.568811 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e"
Feb 01 08:11:16 crc kubenswrapper[4835]: I0201 08:11:16.568996 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:11:16 crc kubenswrapper[4835]: E0201 08:11:16.569018 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:11:16 crc kubenswrapper[4835]: E0201 08:11:16.569458 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:11:18 crc kubenswrapper[4835]: I0201 08:11:18.567379 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:11:18 crc kubenswrapper[4835]: I0201 08:11:18.567776 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:11:18 crc kubenswrapper[4835]: I0201 08:11:18.567863 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:11:18 crc kubenswrapper[4835]: E0201 08:11:18.568177 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:11:21 crc kubenswrapper[4835]: I0201 08:11:21.567533 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:11:21 crc kubenswrapper[4835]: I0201 08:11:21.567891 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:11:21 crc kubenswrapper[4835]: E0201 08:11:21.568268 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:11:21 crc kubenswrapper[4835]: E0201 08:11:21.706081 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc"
Feb 01 08:11:22 crc kubenswrapper[4835]: I0201 08:11:22.447752 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 08:11:26 crc kubenswrapper[4835]: I0201 08:11:26.567572 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:11:26 crc kubenswrapper[4835]: I0201 08:11:26.567871 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:11:26 crc kubenswrapper[4835]: I0201 08:11:26.567944 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:11:26 crc kubenswrapper[4835]: I0201 08:11:26.567951 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:11:26 crc kubenswrapper[4835]: I0201 08:11:26.567982 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:11:26 crc kubenswrapper[4835]: E0201 08:11:26.568283 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:11:27 crc kubenswrapper[4835]: I0201 08:11:27.580848 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:11:27 crc kubenswrapper[4835]: I0201 08:11:27.580890 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:11:27 crc kubenswrapper[4835]: I0201 08:11:27.580937 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:11:27 crc kubenswrapper[4835]: E0201 08:11:27.581449 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:11:27 crc kubenswrapper[4835]: E0201 08:11:27.581533 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:11:29 crc kubenswrapper[4835]: I0201 08:11:29.567898 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:11:29 crc kubenswrapper[4835]: I0201 08:11:29.568023 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:11:29 crc kubenswrapper[4835]: I0201 08:11:29.568186 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:11:29 crc kubenswrapper[4835]: E0201 08:11:29.568834 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:11:31 crc kubenswrapper[4835]: I0201 08:11:31.568186 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:11:31 crc kubenswrapper[4835]: I0201 08:11:31.568635 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:11:31 crc kubenswrapper[4835]: I0201 08:11:31.568682 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e"
Feb 01 08:11:31 crc kubenswrapper[4835]: I0201 08:11:31.568819 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:11:31 crc kubenswrapper[4835]: E0201 08:11:31.810705 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:11:32 crc kubenswrapper[4835]: I0201 08:11:32.537734 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"}
Feb 01 08:11:32 crc kubenswrapper[4835]: I0201 08:11:32.538575 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:11:32 crc kubenswrapper[4835]: I0201 08:11:32.538652 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:11:32 crc kubenswrapper[4835]: I0201 08:11:32.538767 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:11:32 crc kubenswrapper[4835]: E0201 08:11:32.539241 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.567167 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.567780 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:11:36 crc kubenswrapper[4835]: E0201 08:11:36.568122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.582440 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" exitCode=1
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.582497 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"}
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.582564 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.583689 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.583800 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.583853 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.583965 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.583985 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.584052 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:11:36 crc kubenswrapper[4835]: E0201 08:11:36.584736 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:11:39 crc kubenswrapper[4835]: I0201 08:11:39.567309 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:11:39 crc kubenswrapper[4835]: E0201 08:11:39.567830 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:11:40 crc kubenswrapper[4835]: I0201 08:11:40.567758 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:11:40 crc kubenswrapper[4835]: I0201 08:11:40.568166 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:11:40 crc kubenswrapper[4835]: E0201 08:11:40.568639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:11:43 crc kubenswrapper[4835]: I0201 08:11:43.567641 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:11:43 crc kubenswrapper[4835]: I0201 08:11:43.568008 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:11:43 crc kubenswrapper[4835]: I0201 08:11:43.568096 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:11:43 crc kubenswrapper[4835]: E0201 08:11:43.572370 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:11:44 crc kubenswrapper[4835]: I0201 08:11:44.568140 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:11:44 crc kubenswrapper[4835]: I0201 08:11:44.568336 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:11:44 crc kubenswrapper[4835]: I0201 08:11:44.568599 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:11:44 crc kubenswrapper[4835]: E0201 08:11:44.569626 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for
\"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.566892 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.567227 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569171 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569488 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569586 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569684 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569703 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569781 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:11:50 crc kubenswrapper[4835]: E0201 08:11:50.571885 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:11:50 crc kubenswrapper[4835]: E0201 08:11:50.572072 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:11:51 crc kubenswrapper[4835]: I0201 08:11:51.575192 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:11:51 crc kubenswrapper[4835]: E0201 08:11:51.576866 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:11:51 crc kubenswrapper[4835]: I0201 08:11:51.579097 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:11:51 crc kubenswrapper[4835]: I0201 08:11:51.579283 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:11:51 crc kubenswrapper[4835]: E0201 08:11:51.821512 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:52 crc kubenswrapper[4835]: I0201 08:11:52.737030 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e"} Feb 01 08:11:52 crc kubenswrapper[4835]: I0201 08:11:52.737733 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:11:52 crc kubenswrapper[4835]: I0201 08:11:52.738314 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:11:52 crc kubenswrapper[4835]: E0201 08:11:52.738628 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:53 crc kubenswrapper[4835]: I0201 08:11:53.753216 4835 scope.go:117] "RemoveContainer" 
containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:11:53 crc kubenswrapper[4835]: E0201 08:11:53.753695 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.567553 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.568122 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.568305 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:11:56 crc kubenswrapper[4835]: E0201 08:11:56.568933 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.571439 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.571561 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.571747 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:11:56 crc kubenswrapper[4835]: E0201 08:11:56.572166 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:11:58 crc kubenswrapper[4835]: I0201 08:11:58.022281 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" 
podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:00 crc kubenswrapper[4835]: I0201 08:12:00.021554 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:01 crc kubenswrapper[4835]: I0201 08:12:01.023658 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.021015 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.021518 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.022008 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.022027 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.022055 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e" gracePeriod=30 Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.023137 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:04 crc kubenswrapper[4835]: E0201 08:12:04.321880 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.567584 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.567994 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.568039 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:12:04 crc 
kubenswrapper[4835]: I0201 08:12:04.568078 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.568314 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.568486 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.568510 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:12:04 crc kubenswrapper[4835]: E0201 08:12:04.568502 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.568586 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:12:04 crc kubenswrapper[4835]: E0201 08:12:04.785220 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.869233 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e" exitCode=0 Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.869358 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e"} Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.869393 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"} Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.869438 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.870222 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:04 crc kubenswrapper[4835]: E0201 08:12:04.870481 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.869976 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.890361 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30"} Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.891204 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.891287 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.891317 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.891395 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.891461 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:12:04 crc kubenswrapper[4835]: E0201 08:12:04.891794 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:12:05 crc kubenswrapper[4835]: I0201 08:12:05.909961 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:05 crc kubenswrapper[4835]: E0201 08:12:05.910237 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:06 crc kubenswrapper[4835]: I0201 08:12:06.568085 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:12:06 crc kubenswrapper[4835]: E0201 08:12:06.568694 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567191 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567594 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567645 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567710 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567719 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567794 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:12:08 crc kubenswrapper[4835]: E0201 08:12:08.568025 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:12:08 crc kubenswrapper[4835]: E0201 08:12:08.568120 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:12:10 crc kubenswrapper[4835]: I0201 08:12:10.020901 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:10 crc kubenswrapper[4835]: I0201 08:12:10.021204 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.020453 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.585315 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ht7np"] Feb 01 08:12:13 crc kubenswrapper[4835]: E0201 08:12:13.587603 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="extract-content" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.587645 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="extract-content" Feb 01 08:12:13 crc kubenswrapper[4835]: E0201 08:12:13.587684 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="extract-utilities" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.587701 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="extract-utilities" Feb 01 08:12:13 crc kubenswrapper[4835]: E0201 08:12:13.587724 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="registry-server" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.587734 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="registry-server" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.588115 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="registry-server" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.590197 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.608155 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ht7np"] Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.689697 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.689750 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.690050 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dhhb\" (UniqueName: \"kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.791975 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dhhb\" (UniqueName: \"kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.792067 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.792097 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.792582 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.792607 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.823162 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8dhhb\" (UniqueName: \"kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.907843 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:14 crc kubenswrapper[4835]: I0201 08:12:14.379740 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ht7np"] Feb 01 08:12:14 crc kubenswrapper[4835]: I0201 08:12:14.993036 4835 generic.go:334] "Generic (PLEG): container finished" podID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerID="257999e3db2b74db22341c8d2cd296a015048fbe6925996c672901787785ecff" exitCode=0 Feb 01 08:12:14 crc kubenswrapper[4835]: I0201 08:12:14.993109 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerDied","Data":"257999e3db2b74db22341c8d2cd296a015048fbe6925996c672901787785ecff"} Feb 01 08:12:14 crc kubenswrapper[4835]: I0201 08:12:14.993155 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerStarted","Data":"1adfdeac740f8473883190bbbf16ff1b597929b666ea3516bfd2b4a2d6d415b6"} Feb 01 08:12:15 crc kubenswrapper[4835]: I0201 08:12:15.022695 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.004985 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerStarted","Data":"615680d44233ef1c6513a02550240e25fb6a1832f88f299d98d631aaa5490d5a"} Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.022489 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.022571 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.023382 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.023422 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.023453 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" 
containerName="proxy-httpd" containerID="cri-o://88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" gracePeriod=30 Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.025962 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:16 crc kubenswrapper[4835]: E0201 08:12:16.186874 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.016641 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" exitCode=0 Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.016924 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"} Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.018173 4835 scope.go:117] "RemoveContainer" containerID="b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.019494 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.019701 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.020236 4835 generic.go:334] "Generic (PLEG): container finished" podID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerID="615680d44233ef1c6513a02550240e25fb6a1832f88f299d98d631aaa5490d5a" exitCode=0 Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.020282 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerDied","Data":"615680d44233ef1c6513a02550240e25fb6a1832f88f299d98d631aaa5490d5a"} Feb 01 08:12:17 crc kubenswrapper[4835]: E0201 08:12:17.020802 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.575027 4835 scope.go:117] 
"RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:12:17 crc kubenswrapper[4835]: E0201 08:12:17.575341 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.575496 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.575518 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:12:17 crc kubenswrapper[4835]: E0201 08:12:17.575744 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.591985 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"] Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.593780 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.601174 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"] Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.659039 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvh8\" (UniqueName: \"kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.659142 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.659207 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.761246 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.761329 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.761639 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pvh8\" (UniqueName: \"kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.761786 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.761875 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.787351 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6pvh8\" (UniqueName: \"kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb"
Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.956534 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wxb"
Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.032397 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerStarted","Data":"9112aca4ab0f886aa1e24b4cfd392caaf93cbfb45f02e398024e566fc9d33796"}
Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.053057 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ht7np" podStartSLOduration=2.535189397 podStartE2EDuration="5.053037624s" podCreationTimestamp="2026-02-01 08:12:13 +0000 UTC" firstStartedPulling="2026-02-01 08:12:14.995329514 +0000 UTC m=+3008.115765988" lastFinishedPulling="2026-02-01 08:12:17.513177771 +0000 UTC m=+3010.633614215" observedRunningTime="2026-02-01 08:12:18.051718989 +0000 UTC m=+3011.172155443" watchObservedRunningTime="2026-02-01 08:12:18.053037624 +0000 UTC m=+3011.173474068"
Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.451189 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"]
Feb 01 08:12:18 crc kubenswrapper[4835]: W0201 08:12:18.473726 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc90c5237_f023_4eab_b902_e86f65ad245e.slice/crio-64a069c0b4701b893153030422257b773fb329cea9fbbba5ba7eb08fccd5f729 WatchSource:0}: Error finding container 64a069c0b4701b893153030422257b773fb329cea9fbbba5ba7eb08fccd5f729: Status 404 returned error can't find the container with id 64a069c0b4701b893153030422257b773fb329cea9fbbba5ba7eb08fccd5f729
Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.567111 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.567208 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.567235 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.567320 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.567361 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:12:18 crc kubenswrapper[4835]: E0201 08:12:18.570652 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:12:19 crc kubenswrapper[4835]: I0201 08:12:19.043540 4835 generic.go:334] "Generic (PLEG): container finished" podID="c90c5237-f023-4eab-b902-e86f65ad245e" containerID="93f93e04d07dd9952f9910d6a6142e0a0e711c59737c2a3528a5b1405391d8eb" exitCode=0
Feb 01 08:12:19 crc kubenswrapper[4835]: I0201 08:12:19.043622 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerDied","Data":"93f93e04d07dd9952f9910d6a6142e0a0e711c59737c2a3528a5b1405391d8eb"}
Feb 01 08:12:19 crc kubenswrapper[4835]: I0201 08:12:19.043693 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerStarted","Data":"64a069c0b4701b893153030422257b773fb329cea9fbbba5ba7eb08fccd5f729"}
Feb 01 08:12:20 crc kubenswrapper[4835]: I0201 08:12:20.055484 4835 generic.go:334] "Generic (PLEG): container finished" podID="c90c5237-f023-4eab-b902-e86f65ad245e" containerID="71d1bcb8cf09316b240e912a0cd55b9e033653be06919c5b8fd25c715b25b972" exitCode=0
Feb 01 08:12:20 crc kubenswrapper[4835]: I0201 08:12:20.055836 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerDied","Data":"71d1bcb8cf09316b240e912a0cd55b9e033653be06919c5b8fd25c715b25b972"}
Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.076937 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerStarted","Data":"a3e342f4cc3d9d80d0cf07fb396f5d94d0890fcb4992a1a13698a0ae50be4930"}
Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.121114 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g6wxb" podStartSLOduration=2.648621941 podStartE2EDuration="4.121088874s" podCreationTimestamp="2026-02-01 08:12:17 +0000 UTC" firstStartedPulling="2026-02-01 08:12:19.045893486 +0000 UTC m=+3012.166329920" lastFinishedPulling="2026-02-01 08:12:20.518360409 +0000 UTC m=+3013.638796853" observedRunningTime="2026-02-01 08:12:21.109612872 +0000 UTC m=+3014.230049336" watchObservedRunningTime="2026-02-01 08:12:21.121088874 +0000 UTC m=+3014.241525338"
Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.567243 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.567352 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.567543 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:12:21 crc kubenswrapper[4835]: E0201 08:12:21.567913 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.568198 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.568402 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.568683 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:12:21 crc kubenswrapper[4835]: E0201 08:12:21.569155 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:12:23 crc kubenswrapper[4835]: I0201 08:12:23.908401 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ht7np"
Feb 01 08:12:23 crc kubenswrapper[4835]: I0201 08:12:23.908702 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ht7np"
Feb 01 08:12:23 crc kubenswrapper[4835]: I0201 08:12:23.964241 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ht7np"
Feb 01 08:12:24 crc kubenswrapper[4835]: I0201 08:12:24.175753 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ht7np"
Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.146755 4835 generic.go:334] "Generic (PLEG): container finished" podID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6" exitCode=0
Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.146859 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" event={"ID":"dfdcbe67-d5e0-4882-b2d9-e039513a25f0","Type":"ContainerDied","Data":"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"}
Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.148817 4835 scope.go:117] "RemoveContainer" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"
Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.534003 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwdqc_must-gather-c7xxg_dfdcbe67-d5e0-4882-b2d9-e039513a25f0/gather/0.log"
Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.761668 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ht7np"]
Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.761975 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ht7np" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerName="registry-server" containerID="cri-o://9112aca4ab0f886aa1e24b4cfd392caaf93cbfb45f02e398024e566fc9d33796" gracePeriod=2
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.159751 4835 generic.go:334] "Generic (PLEG): container finished" podID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerID="9112aca4ab0f886aa1e24b4cfd392caaf93cbfb45f02e398024e566fc9d33796" exitCode=0
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.159792 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerDied","Data":"9112aca4ab0f886aa1e24b4cfd392caaf93cbfb45f02e398024e566fc9d33796"}
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.159817 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerDied","Data":"1adfdeac740f8473883190bbbf16ff1b597929b666ea3516bfd2b4a2d6d415b6"}
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.159829 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1adfdeac740f8473883190bbbf16ff1b597929b666ea3516bfd2b4a2d6d415b6"
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.194771 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ht7np"
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.256061 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dhhb\" (UniqueName: \"kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb\") pod \"83bc0253-a027-4b59-ae32-e1c1279057c8\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") "
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.256137 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities\") pod \"83bc0253-a027-4b59-ae32-e1c1279057c8\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") "
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.256296 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content\") pod \"83bc0253-a027-4b59-ae32-e1c1279057c8\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") "
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.257154 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities" (OuterVolumeSpecName: "utilities") pod "83bc0253-a027-4b59-ae32-e1c1279057c8" (UID: "83bc0253-a027-4b59-ae32-e1c1279057c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.261891 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb" (OuterVolumeSpecName: "kube-api-access-8dhhb") pod "83bc0253-a027-4b59-ae32-e1c1279057c8" (UID: "83bc0253-a027-4b59-ae32-e1c1279057c8"). InnerVolumeSpecName "kube-api-access-8dhhb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.314745 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83bc0253-a027-4b59-ae32-e1c1279057c8" (UID: "83bc0253-a027-4b59-ae32-e1c1279057c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.358490 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dhhb\" (UniqueName: \"kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb\") on node \"crc\" DevicePath \"\""
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.358531 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.358546 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.958468 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g6wxb"
Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.958898 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g6wxb"
Feb 01 08:12:28 crc kubenswrapper[4835]: I0201 08:12:28.095517 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g6wxb"
Feb 01 08:12:28 crc kubenswrapper[4835]: I0201 08:12:28.170099 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ht7np"
Feb 01 08:12:28 crc kubenswrapper[4835]: I0201 08:12:28.197015 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ht7np"]
Feb 01 08:12:28 crc kubenswrapper[4835]: I0201 08:12:28.202549 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ht7np"]
Feb 01 08:12:28 crc kubenswrapper[4835]: I0201 08:12:28.209337 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g6wxb"
Feb 01 08:12:29 crc kubenswrapper[4835]: I0201 08:12:29.568131 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:12:29 crc kubenswrapper[4835]: E0201 08:12:29.568375 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:12:29 crc kubenswrapper[4835]: I0201 08:12:29.578909 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" path="/var/lib/kubelet/pods/83bc0253-a027-4b59-ae32-e1c1279057c8/volumes"
Feb 01 08:12:30 crc kubenswrapper[4835]: I0201 08:12:30.567259 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"
Feb 01 08:12:30 crc kubenswrapper[4835]: I0201 08:12:30.567614 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:12:30 crc kubenswrapper[4835]: E0201 08:12:30.567815 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:12:31 crc kubenswrapper[4835]: I0201 08:12:31.567617 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:12:31 crc kubenswrapper[4835]: I0201 08:12:31.568462 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:12:31 crc kubenswrapper[4835]: E0201 08:12:31.569047 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:12:32 crc kubenswrapper[4835]: I0201 08:12:32.766682 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"]
Feb 01 08:12:32 crc kubenswrapper[4835]: I0201 08:12:32.767351 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g6wxb" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" containerName="registry-server" containerID="cri-o://a3e342f4cc3d9d80d0cf07fb396f5d94d0890fcb4992a1a13698a0ae50be4930" gracePeriod=2
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.240351 4835 generic.go:334] "Generic (PLEG): container finished" podID="c90c5237-f023-4eab-b902-e86f65ad245e" containerID="a3e342f4cc3d9d80d0cf07fb396f5d94d0890fcb4992a1a13698a0ae50be4930" exitCode=0
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.240840 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerDied","Data":"a3e342f4cc3d9d80d0cf07fb396f5d94d0890fcb4992a1a13698a0ae50be4930"}
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.308185 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wxb"
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.370185 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities\") pod \"c90c5237-f023-4eab-b902-e86f65ad245e\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") "
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.370275 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content\") pod \"c90c5237-f023-4eab-b902-e86f65ad245e\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") "
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.370400 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pvh8\" (UniqueName: \"kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8\") pod \"c90c5237-f023-4eab-b902-e86f65ad245e\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") "
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.371396 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities" (OuterVolumeSpecName: "utilities") pod "c90c5237-f023-4eab-b902-e86f65ad245e" (UID: "c90c5237-f023-4eab-b902-e86f65ad245e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.377016 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8" (OuterVolumeSpecName: "kube-api-access-6pvh8") pod "c90c5237-f023-4eab-b902-e86f65ad245e" (UID: "c90c5237-f023-4eab-b902-e86f65ad245e"). InnerVolumeSpecName "kube-api-access-6pvh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.395943 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c90c5237-f023-4eab-b902-e86f65ad245e" (UID: "c90c5237-f023-4eab-b902-e86f65ad245e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.472781 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.472825 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.472842 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pvh8\" (UniqueName: \"kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8\") on node \"crc\" DevicePath \"\""
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.569592 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.569685 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.569714 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.569830 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.569876 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:12:33 crc kubenswrapper[4835]: E0201 08:12:33.570240 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.862380 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vwdqc/must-gather-c7xxg"]
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.863184 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" podUID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerName="copy" containerID="cri-o://275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7" gracePeriod=2
Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.868614 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vwdqc/must-gather-c7xxg"]
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.237053 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwdqc_must-gather-c7xxg_dfdcbe67-d5e0-4882-b2d9-e039513a25f0/copy/0.log"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.237985 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.249605 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwdqc_must-gather-c7xxg_dfdcbe67-d5e0-4882-b2d9-e039513a25f0/copy/0.log"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.250039 4835 generic.go:334] "Generic (PLEG): container finished" podID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerID="275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7" exitCode=143
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.250116 4835 scope.go:117] "RemoveContainer" containerID="275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.250251 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.254104 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerDied","Data":"64a069c0b4701b893153030422257b773fb329cea9fbbba5ba7eb08fccd5f729"}
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.254198 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wxb"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.287071 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"]
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.287747 4835 scope.go:117] "RemoveContainer" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.292569 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"]
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.292766 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2blml\" (UniqueName: \"kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml\") pod \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") "
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.292989 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-must-gather-output\") pod \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") "
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.300624 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml" (OuterVolumeSpecName: "kube-api-access-2blml") pod "dfdcbe67-d5e0-4882-b2d9-e039513a25f0" (UID: "dfdcbe67-d5e0-4882-b2d9-e039513a25f0"). InnerVolumeSpecName "kube-api-access-2blml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.341259 4835 scope.go:117] "RemoveContainer" containerID="275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7"
Feb 01 08:12:34 crc kubenswrapper[4835]: E0201 08:12:34.341923 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7\": container with ID starting with 275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7 not found: ID does not exist" containerID="275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.341970 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7"} err="failed to get container status \"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7\": rpc error: code = NotFound desc = could not find container \"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7\": container with ID starting with 275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7 not found: ID does not exist"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.342000 4835 scope.go:117] "RemoveContainer" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"
Feb 01 08:12:34 crc kubenswrapper[4835]: E0201 08:12:34.342480 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6\": container with ID starting with ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6 not found: ID does not exist" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.342546 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"} err="failed to get container status \"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6\": rpc error: code = NotFound desc = could not find container \"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6\": container with ID starting with ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6 not found: ID does not exist"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.342566 4835 scope.go:117] "RemoveContainer" containerID="a3e342f4cc3d9d80d0cf07fb396f5d94d0890fcb4992a1a13698a0ae50be4930"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.375503 4835 scope.go:117] "RemoveContainer" containerID="71d1bcb8cf09316b240e912a0cd55b9e033653be06919c5b8fd25c715b25b972"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.384167 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "dfdcbe67-d5e0-4882-b2d9-e039513a25f0" (UID: "dfdcbe67-d5e0-4882-b2d9-e039513a25f0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.395272 4835 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.395312 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2blml\" (UniqueName: \"kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml\") on node \"crc\" DevicePath \"\""
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.400742 4835 scope.go:117] "RemoveContainer" containerID="93f93e04d07dd9952f9910d6a6142e0a0e711c59737c2a3528a5b1405391d8eb"
Feb 01 08:12:35 crc kubenswrapper[4835]: I0201 08:12:35.567352 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:12:35 crc kubenswrapper[4835]: I0201 08:12:35.567972 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:12:35 crc kubenswrapper[4835]: I0201 08:12:35.568025 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:12:35 crc kubenswrapper[4835]: I0201 08:12:35.568080 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:12:35 crc kubenswrapper[4835]: I0201 08:12:35.568100 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:12:35 crc kubenswrapper[4835]: I0201 08:12:35.568209 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:12:35 crc kubenswrapper[4835]: E0201 08:12:35.568450 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:12:35 crc kubenswrapper[4835]: E0201 08:12:35.568507 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:12:35 crc kubenswrapper[4835]: I0201 08:12:35.580359 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" path="/var/lib/kubelet/pods/c90c5237-f023-4eab-b902-e86f65ad245e/volumes"
Feb 01 08:12:35 crc kubenswrapper[4835]: I0201 08:12:35.581613 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" path="/var/lib/kubelet/pods/dfdcbe67-d5e0-4882-b2d9-e039513a25f0/volumes"
Feb 01 08:12:40 crc kubenswrapper[4835]: I0201 08:12:40.566983 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:12:40 crc kubenswrapper[4835]: E0201 08:12:40.567785 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:12:44 crc kubenswrapper[4835]: I0201 08:12:44.567643 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"
Feb 01 08:12:44 crc kubenswrapper[4835]: I0201 08:12:44.567976 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:12:44 crc kubenswrapper[4835]: E0201 08:12:44.568345 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:12:45 crc kubenswrapper[4835]: I0201 08:12:45.567362 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:12:45 crc kubenswrapper[4835]: I0201 08:12:45.567390 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:12:45 crc kubenswrapper[4835]: E0201 08:12:45.567581 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:12:47 crc kubenswrapper[4835]: I0201 08:12:47.575039 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:12:47 crc kubenswrapper[4835]: I0201 08:12:47.575542 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:12:47 crc kubenswrapper[4835]: I0201 08:12:47.575720 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:12:47 crc kubenswrapper[4835]: E0201 08:12:47.576170 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:12:48 crc kubenswrapper[4835]: I0201 08:12:48.566723 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:12:48 crc kubenswrapper[4835]: I0201 08:12:48.567095 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:12:48 crc kubenswrapper[4835]: I0201 08:12:48.567186 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:12:48 crc kubenswrapper[4835]: I0201 08:12:48.567295 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:12:48 crc kubenswrapper[4835]: I0201 08:12:48.567378 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:12:48 crc kubenswrapper[4835]: E0201 08:12:48.567785 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:12:50 crc kubenswrapper[4835]: I0201 08:12:50.569144 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:12:50 crc kubenswrapper[4835]: I0201 08:12:50.570314 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:12:50 crc kubenswrapper[4835]: I0201 08:12:50.570839 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:12:50 crc kubenswrapper[4835]: E0201 08:12:50.572026 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:12:52 crc kubenswrapper[4835]: I0201 08:12:52.566440 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:12:52 crc kubenswrapper[4835]: E0201 08:12:52.566811 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:12:55 crc kubenswrapper[4835]: I0201 08:12:55.567012 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"
Feb 01 08:12:55 crc kubenswrapper[4835]: I0201 08:12:55.567311 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:12:55 crc kubenswrapper[4835]: E0201 08:12:55.567558 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:12:59 crc kubenswrapper[4835]: I0201 08:12:59.567337 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:12:59 crc kubenswrapper[4835]: I0201 08:12:59.567789 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:12:59 crc kubenswrapper[4835]: I0201 08:12:59.567907 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:12:59 crc kubenswrapper[4835]: E0201 08:12:59.568283 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.567046 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.567096 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.567960 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.568142 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.568192 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.568325 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.568395 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:13:00 crc kubenswrapper[4835]: E0201 08:13:00.569196 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:13:00 crc kubenswrapper[4835]: E0201 08:13:00.807284 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:13:01 crc kubenswrapper[4835]: I0201 08:13:01.541554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa"}
Feb 01 08:13:01 crc kubenswrapper[4835]: I0201 08:13:01.542510 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:13:01 crc kubenswrapper[4835]: I0201 08:13:01.542665 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 08:13:01 crc kubenswrapper[4835]: E0201 08:13:01.542855 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:13:02 crc kubenswrapper[4835]: I0201 08:13:02.553394 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:13:02 crc kubenswrapper[4835]: E0201 08:13:02.553816 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:13:04 crc kubenswrapper[4835]: I0201 08:13:04.567561 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:13:04 crc kubenswrapper[4835]: I0201 08:13:04.567770 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:13:04 crc kubenswrapper[4835]: I0201 08:13:04.568030 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:13:04 crc kubenswrapper[4835]: E0201 08:13:04.568632 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:13:06 crc kubenswrapper[4835]: I0201 08:13:06.540742 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:13:07 crc kubenswrapper[4835]: I0201 08:13:07.536875 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:13:07 crc kubenswrapper[4835]: I0201 08:13:07.574693 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:13:07 crc kubenswrapper[4835]: E0201 08:13:07.575047 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:13:07 crc kubenswrapper[4835]: I0201 08:13:07.602685 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 08:13:07 crc kubenswrapper[4835]: E0201 08:13:07.602937 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 08:13:07 crc kubenswrapper[4835]: E0201 08:13:07.603113 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:15:09.603079645 +0000 UTC m=+3182.723516089 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
Feb 01 08:13:09 crc kubenswrapper[4835]: I0201 08:13:09.537630 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:13:10 crc kubenswrapper[4835]: I0201 08:13:10.567672 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"
Feb 01 08:13:10 crc kubenswrapper[4835]: I0201 08:13:10.568126 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:13:10 crc kubenswrapper[4835]: E0201 08:13:10.568563 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.537444 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.537928 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.537980 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.538691 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.538724 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.538752 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa" gracePeriod=30
Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.539881 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 08:13:12 crc kubenswrapper[4835]: E0201 08:13:12.833779 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.661848 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa" exitCode=0
Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.661903 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa"}
Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.661933 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"}
Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.661953 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.662592 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:13:13 crc kubenswrapper[4835]: E0201 08:13:13.662951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.663135 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.567577 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.567927 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.567995 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568027 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568119 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568167 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568226 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568261 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:13:14 crc kubenswrapper[4835]: E0201 08:13:14.568643 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:13:14 crc kubenswrapper[4835]: E0201 08:13:14.568697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.672931 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:13:14 crc kubenswrapper[4835]: E0201 08:13:14.673152 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:13:15 crc kubenswrapper[4835]: I0201 08:13:15.568184 4835 scope.go:117]
"RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:15 crc kubenswrapper[4835]: I0201 08:13:15.568271 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:15 crc kubenswrapper[4835]: I0201 08:13:15.568354 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:15 crc kubenswrapper[4835]: E0201 08:13:15.568692 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:17 crc kubenswrapper[4835]: I0201 08:13:17.538048 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:18 crc kubenswrapper[4835]: I0201 08:13:18.538040 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:18 crc kubenswrapper[4835]: I0201 08:13:18.567507 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:13:18 crc kubenswrapper[4835]: E0201 08:13:18.567800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:13:21 crc kubenswrapper[4835]: I0201 08:13:21.539885 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:21 crc kubenswrapper[4835]: I0201 08:13:21.566729 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:13:21 crc kubenswrapper[4835]: I0201 08:13:21.566757 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:13:21 crc kubenswrapper[4835]: E0201 08:13:21.566985 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:13:22 crc kubenswrapper[4835]: I0201 08:13:22.537650 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.537731 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.537832 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.538786 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.538820 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.538859 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" gracePeriod=30 Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.539314 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:24 crc kubenswrapper[4835]: E0201 08:13:24.673937 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.767308 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" exitCode=0 Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.767344 4835 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"} Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.767781 4835 scope.go:117] "RemoveContainer" containerID="6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.768850 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.768923 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:24 crc kubenswrapper[4835]: E0201 08:13:24.769371 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:25 crc kubenswrapper[4835]: E0201 08:13:25.449466 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:13:25 crc kubenswrapper[4835]: I0201 08:13:25.781013 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:13:26 crc kubenswrapper[4835]: I0201 08:13:26.567605 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:13:26 crc kubenswrapper[4835]: I0201 08:13:26.567716 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:13:26 crc kubenswrapper[4835]: I0201 08:13:26.567761 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:13:26 crc kubenswrapper[4835]: I0201 08:13:26.567890 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:13:26 crc kubenswrapper[4835]: I0201 08:13:26.567955 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:13:26 crc kubenswrapper[4835]: E0201 08:13:26.568394 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.567368 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568064 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568191 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568281 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568439 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568519 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568616 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:13:29 crc 
kubenswrapper[4835]: E0201 08:13:29.568807 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:13:29 crc kubenswrapper[4835]: E0201 08:13:29.568921 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:29 crc kubenswrapper[4835]: E0201 08:13:29.568924 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:13:33 crc kubenswrapper[4835]: I0201 08:13:33.569468 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:13:33 crc kubenswrapper[4835]: I0201 08:13:33.571606 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:13:33 crc kubenswrapper[4835]: E0201 08:13:33.575016 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:13:37 crc kubenswrapper[4835]: I0201 08:13:37.571121 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:13:37 crc kubenswrapper[4835]: I0201 08:13:37.571450 4835 scope.go:117] "RemoveContainer" 
containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:37 crc kubenswrapper[4835]: E0201 08:13:37.571662 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:40 crc kubenswrapper[4835]: I0201 08:13:40.567300 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:13:40 crc kubenswrapper[4835]: I0201 08:13:40.567707 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:13:40 crc kubenswrapper[4835]: I0201 08:13:40.567740 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:13:40 crc kubenswrapper[4835]: I0201 08:13:40.567817 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:13:40 crc kubenswrapper[4835]: I0201 08:13:40.567860 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:13:40 crc kubenswrapper[4835]: E0201 08:13:40.568199 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:13:41 crc kubenswrapper[4835]: I0201 08:13:41.568042 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:41 crc kubenswrapper[4835]: I0201 08:13:41.569200 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:41 crc kubenswrapper[4835]: I0201 08:13:41.569400 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:41 crc kubenswrapper[4835]: E0201 08:13:41.569950 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:43 crc kubenswrapper[4835]: I0201 08:13:43.567503 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:13:43 crc kubenswrapper[4835]: I0201 08:13:43.567559 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:13:43 crc kubenswrapper[4835]: I0201 08:13:43.567643 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:13:43 crc kubenswrapper[4835]: E0201 08:13:43.567726 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:13:43 crc kubenswrapper[4835]: I0201 08:13:43.567772 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:13:43 crc kubenswrapper[4835]: E0201 08:13:43.568101 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:13:45 crc kubenswrapper[4835]: I0201 08:13:45.567791 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:13:45 crc kubenswrapper[4835]: I0201 08:13:45.568238 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:13:45 crc kubenswrapper[4835]: E0201 08:13:45.568699 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:13:49 crc kubenswrapper[4835]: I0201 08:13:49.567214 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:13:49 crc kubenswrapper[4835]: I0201 08:13:49.567624 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:49 crc kubenswrapper[4835]: E0201 08:13:49.567928 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:52 crc kubenswrapper[4835]: I0201 08:13:52.568321 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:13:52 crc kubenswrapper[4835]: I0201 08:13:52.569721 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:13:52 crc kubenswrapper[4835]: I0201 08:13:52.569801 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:13:52 crc kubenswrapper[4835]: I0201 08:13:52.569931 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:13:52 crc kubenswrapper[4835]: I0201 08:13:52.570000 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:13:52 crc kubenswrapper[4835]: E0201 08:13:52.570600 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:13:55 crc kubenswrapper[4835]: I0201 08:13:55.567563 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:55 crc kubenswrapper[4835]: I0201 08:13:55.567667 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:55 crc kubenswrapper[4835]: I0201 08:13:55.567776 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:55 crc kubenswrapper[4835]: E0201 08:13:55.568117 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:57 crc kubenswrapper[4835]: I0201 08:13:57.573400 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:13:57 crc kubenswrapper[4835]: I0201 08:13:57.573781 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:13:57 crc kubenswrapper[4835]: E0201 08:13:57.574126 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.072198 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" exitCode=1 Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.072262 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"} Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.072306 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.073566 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:58 crc 
kubenswrapper[4835]: I0201 08:13:58.073705 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.073750 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.073893 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:58 crc kubenswrapper[4835]: E0201 08:13:58.074503 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.567046 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:13:58 crc kubenswrapper[4835]: E0201 08:13:58.567324 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.567860 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.567999 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.568229 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:13:58 crc kubenswrapper[4835]: E0201 08:13:58.568779 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:14:00 crc kubenswrapper[4835]: I0201 08:14:00.567003 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:14:00 crc kubenswrapper[4835]: I0201 08:14:00.567040 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:14:00 crc kubenswrapper[4835]: E0201 08:14:00.567241 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.120305 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" exitCode=1 Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.120386 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28"} Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.120812 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.121592 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.121698 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.121738 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.121823 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.121857 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:14:01 crc kubenswrapper[4835]: E0201 08:14:01.122516 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:14:03 crc kubenswrapper[4835]: I0201 08:14:03.567970 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:14:03 crc kubenswrapper[4835]: I0201 08:14:03.568464 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:14:03 crc kubenswrapper[4835]: I0201 08:14:03.568507 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:14:03 crc kubenswrapper[4835]: I0201 08:14:03.568614 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:14:03 crc kubenswrapper[4835]: I0201 08:14:03.568671 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:14:03 crc kubenswrapper[4835]: E0201 08:14:03.569732 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:14:08 crc kubenswrapper[4835]: I0201 08:14:08.566967 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:14:08 crc kubenswrapper[4835]: I0201 08:14:08.567354 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:14:08 crc kubenswrapper[4835]: E0201 08:14:08.567727 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:14:10 crc kubenswrapper[4835]: I0201 08:14:10.567889 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:14:10 crc kubenswrapper[4835]: I0201 08:14:10.567975 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:14:10 crc kubenswrapper[4835]: I0201 08:14:10.568093 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:14:10 crc kubenswrapper[4835]: E0201 08:14:10.568400 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:14:12 crc kubenswrapper[4835]: I0201 08:14:12.566627 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:14:12 crc kubenswrapper[4835]: I0201 08:14:12.566958 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:14:12 crc kubenswrapper[4835]: E0201 08:14:12.567270 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:14:13 crc kubenswrapper[4835]: I0201 08:14:13.567342 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:14:13 crc kubenswrapper[4835]: E0201 08:14:13.567649 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:14:14 crc kubenswrapper[4835]: I0201 08:14:14.567498 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 
01 08:14:14 crc kubenswrapper[4835]: I0201 08:14:14.567585 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:14:14 crc kubenswrapper[4835]: I0201 08:14:14.567614 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:14:14 crc kubenswrapper[4835]: I0201 08:14:14.567695 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:14:14 crc kubenswrapper[4835]: I0201 08:14:14.567738 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:14:14 crc kubenswrapper[4835]: E0201 08:14:14.568158 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:14:15 crc kubenswrapper[4835]: I0201 08:14:15.568971 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:14:15 crc kubenswrapper[4835]: I0201 08:14:15.569593 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:14:15 crc kubenswrapper[4835]: I0201 08:14:15.569640 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:15 crc kubenswrapper[4835]: I0201 08:14:15.569739 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" Feb 01 08:14:15 crc kubenswrapper[4835]: I0201 08:14:15.569754 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:14:16 crc kubenswrapper[4835]: E0201 08:14:16.061611 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" 
podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.278138 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" exitCode=1
Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.278191 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"}
Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.278219 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"}
Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.278233 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196"}
Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.278255 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.279048 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196"
Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.279183 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"
Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.279256 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28"
Feb 01 08:14:16 crc kubenswrapper[4835]: E0201 08:14:16.279668 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.292927 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9" exitCode=1
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.292965 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46" exitCode=1
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.292986 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"}
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293019 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"}
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293043 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293680 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196"
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293746 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293786 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293844 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28"
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293854 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"
Feb 01 08:14:17 crc kubenswrapper[4835]: E0201 08:14:17.294249 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.336156 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.313163 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1" exitCode=1
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.313236 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"}
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.313741 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.314887 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.315122 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.315185 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.315326 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:14:18 crc kubenswrapper[4835]: E0201 08:14:18.316036 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.327818 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196"
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.327945 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.327988 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.328080 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28"
Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.328092 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"
Feb 01 08:14:18 crc kubenswrapper[4835]: E0201 08:14:18.328674 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:14:19 crc kubenswrapper[4835]: I0201 08:14:19.567240 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"
Feb 01 08:14:19 crc kubenswrapper[4835]: I0201 08:14:19.567282 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:14:19 crc kubenswrapper[4835]: E0201 08:14:19.567697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:14:25 crc kubenswrapper[4835]: I0201 08:14:25.567642 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:14:25 crc kubenswrapper[4835]: I0201 08:14:25.568477 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:14:25 crc kubenswrapper[4835]: I0201 08:14:25.568568 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:14:25 crc kubenswrapper[4835]: I0201 08:14:25.568725 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:14:25 crc kubenswrapper[4835]: I0201 08:14:25.568795 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:14:25 crc kubenswrapper[4835]: E0201 08:14:25.569366 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:14:27 crc kubenswrapper[4835]: I0201 08:14:27.575791 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"
Feb 01 08:14:27 crc kubenswrapper[4835]: I0201 08:14:27.575839 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:14:27 crc kubenswrapper[4835]: E0201 08:14:27.576202 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:14:28 crc kubenswrapper[4835]: I0201 08:14:28.566318 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:14:28 crc kubenswrapper[4835]: E0201 08:14:28.566932 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:14:29 crc kubenswrapper[4835]: I0201 08:14:29.568104 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196"
Feb 01 08:14:29 crc kubenswrapper[4835]: I0201 08:14:29.568181 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"
Feb 01 08:14:29 crc kubenswrapper[4835]: I0201 08:14:29.568206 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"
Feb 01 08:14:29 crc kubenswrapper[4835]: I0201 08:14:29.568257 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28"
Feb 01 08:14:29 crc kubenswrapper[4835]: I0201 08:14:29.568263 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"
Feb 01 08:14:29 crc kubenswrapper[4835]: E0201 08:14:29.568610 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:14:31 crc kubenswrapper[4835]: I0201 08:14:31.568987 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"
Feb 01 08:14:31 crc kubenswrapper[4835]: I0201 08:14:31.569342 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:14:31 crc kubenswrapper[4835]: E0201 08:14:31.569912 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:14:33 crc kubenswrapper[4835]: I0201 08:14:33.567675 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:14:33 crc kubenswrapper[4835]: I0201 08:14:33.568295 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:14:33 crc kubenswrapper[4835]: I0201 08:14:33.568349 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"
Feb 01 08:14:33 crc kubenswrapper[4835]: I0201 08:14:33.568534 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:14:34 crc kubenswrapper[4835]: E0201 08:14:34.084295 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510336 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09" exitCode=1
Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510368 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb" exitCode=1
Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510386 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587"}
Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510456 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09"}
Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510476 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb"}
Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510499 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.515918 4835 scope.go:117] "RemoveContainer" containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb"
Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.516017 4835 scope.go:117] "RemoveContainer" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09"
Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.516044 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"
Feb 01 08:14:34 crc kubenswrapper[4835]: E0201 08:14:34.516628 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.575705 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.533743 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587" exitCode=1
Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.533783 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587"}
Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.533826 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.534599 4835 scope.go:117] "RemoveContainer" containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb"
Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.534682 4835 scope.go:117] "RemoveContainer" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09"
Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.534710 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"
Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.534786 4835 scope.go:117] "RemoveContainer" containerID="6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587"
Feb 01 08:14:35 crc kubenswrapper[4835]: E0201 08:14:35.535300 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567280 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"
Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567605 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567752 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567835 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567867 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:14:38 crc kubenswrapper[4835]: E0201 08:14:38.567900 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567971 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.568023 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:14:38 crc kubenswrapper[4835]: E0201 08:14:38.568568 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:14:39 crc kubenswrapper[4835]: I0201 08:14:39.568108 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:14:39 crc kubenswrapper[4835]: E0201 08:14:39.568456 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.595625 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30" exitCode=1
Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.595732 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30"}
Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.596040 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597345 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597546 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597608 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597742 4835 scope.go:117] "RemoveContainer" containerID="0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30"
Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597790 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597888 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:14:40 crc kubenswrapper[4835]: E0201 08:14:40.598687 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:14:41 crc kubenswrapper[4835]: I0201 08:14:41.568027 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196"
Feb 01 08:14:41 crc kubenswrapper[4835]: I0201 08:14:41.568112 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"
Feb 01 08:14:41 crc kubenswrapper[4835]: I0201 08:14:41.568139 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"
Feb 01 08:14:41 crc kubenswrapper[4835]: I0201 08:14:41.568194 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28"
Feb 01 08:14:41 crc kubenswrapper[4835]: I0201 08:14:41.568202 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"
Feb 01 08:14:41 crc kubenswrapper[4835]: E0201 08:14:41.763892 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:14:42 crc kubenswrapper[4835]: I0201 08:14:42.635391 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"1a7388e9a033acf55cdc32c414808c3f5eb1860ae550b0ea7c76774f61add823"}
Feb 01 08:14:42 crc kubenswrapper[4835]: I0201 08:14:42.636100 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196"
Feb 01 08:14:42 crc kubenswrapper[4835]: I0201 08:14:42.636166 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"
Feb 01 08:14:42 crc kubenswrapper[4835]: I0201 08:14:42.636187 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"
Feb 01 08:14:42 crc kubenswrapper[4835]: I0201 08:14:42.636243 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"
Feb 01 08:14:42 crc kubenswrapper[4835]: E0201 08:14:42.636560 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:14:45 crc kubenswrapper[4835]: I0201 08:14:45.567308 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"
Feb 01 08:14:45 crc kubenswrapper[4835]: I0201 08:14:45.567747 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:14:45 crc kubenswrapper[4835]: E0201 08:14:45.568046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:14:50 crc kubenswrapper[4835]: I0201 08:14:50.567166 4835 scope.go:117] "RemoveContainer" containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb"
Feb 01 08:14:50 crc kubenswrapper[4835]: I0201 08:14:50.567650 4835 scope.go:117] "RemoveContainer" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09"
Feb 01 08:14:50 crc kubenswrapper[4835]: I0201 08:14:50.567697 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"
Feb 01 08:14:50 crc kubenswrapper[4835]: I0201 08:14:50.567818 4835 scope.go:117] "RemoveContainer" containerID="6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587"
Feb 01 08:14:50 crc kubenswrapper[4835]: E0201 08:14:50.568379 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.567403 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"
Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.568004 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.568061 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196"
Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.568208 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"
Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.568272 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"
Feb 01 08:14:53 crc kubenswrapper[4835]: E0201 08:14:53.568281 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.568461 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"
Feb 01 08:14:53 crc kubenswrapper[4835]: E0201 08:14:53.569019 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:14:54 crc kubenswrapper[4835]: I0201 08:14:54.567545 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:14:54 crc kubenswrapper[4835]: E0201 08:14:54.567828 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:14:55 crc kubenswrapper[4835]: I0201 08:14:55.567850 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:14:55 crc kubenswrapper[4835]: I0201 08:14:55.568477 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:14:55 crc kubenswrapper[4835]: I0201 08:14:55.568541 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:14:55 crc kubenswrapper[4835]: I0201 08:14:55.568655 4835 scope.go:117] "RemoveContainer" containerID="0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30"
Feb 01 08:14:55 crc kubenswrapper[4835]: I0201 08:14:55.568673 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:14:55 crc kubenswrapper[4835]: I0201 08:14:55.568753 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:14:55 crc kubenswrapper[4835]: E0201 08:14:55.569497 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:14:56 crc kubenswrapper[4835]: I0201 08:14:56.787696 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="4c18a6c0ad7fc9f3254096d7bfa007b9115d0360f41fd74b092f41a03c6d622a" exitCode=1
Feb 01 08:14:56 crc kubenswrapper[4835]: I0201 08:14:56.787811 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"4c18a6c0ad7fc9f3254096d7bfa007b9115d0360f41fd74b092f41a03c6d622a"}
Feb 01 08:14:56 crc kubenswrapper[4835]: I0201 08:14:56.788355 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:14:56 crc kubenswrapper[4835]: I0201 08:14:56.791398 4835 scope.go:117] "RemoveContainer" containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb"
Feb 01 08:14:56 crc kubenswrapper[4835]: I0201 08:14:56.791650 4835 scope.go:117] "RemoveContainer" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09"
Feb 01 08:14:56 crc kubenswrapper[4835]: I0201 08:14:56.791711 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"
Feb 01 08:14:56 crc kubenswrapper[4835]: I0201 08:14:56.791816 4835 scope.go:117] "RemoveContainer" containerID="4c18a6c0ad7fc9f3254096d7bfa007b9115d0360f41fd74b092f41a03c6d622a"
Feb 01 08:14:56 crc kubenswrapper[4835]: I0201 08:14:56.791854 4835 scope.go:117] "RemoveContainer" containerID="6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587"
Feb 01 08:14:56 crc kubenswrapper[4835]: E0201 08:14:56.795773 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:14:58 crc kubenswrapper[4835]: I0201 08:14:58.567702 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"
Feb 01 08:14:58 crc kubenswrapper[4835]: I0201 08:14:58.568555 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"
Feb 01 08:14:58 crc kubenswrapper[4835]: E0201 08:14:58.569385 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.151214 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"]
Feb 01 08:15:00 crc kubenswrapper[4835]: E0201 08:15:00.152195 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerName="copy"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.152225 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerName="copy"
Feb 01 08:15:00 crc kubenswrapper[4835]: E0201 08:15:00.152270 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerName="gather"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.152286 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerName="gather"
Feb 01 08:15:00 crc kubenswrapper[4835]: E0201 08:15:00.152355 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" containerName="registry-server"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.152378 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" containerName="registry-server"
Feb 01 08:15:00 crc kubenswrapper[4835]: E0201 08:15:00.152476 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" containerName="extract-content"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.152495 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" containerName="extract-content"
Feb 01 08:15:00 crc kubenswrapper[4835]: E0201 08:15:00.152548 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerName="registry-server"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.152565 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerName="registry-server"
Feb 01 08:15:00 crc kubenswrapper[4835]: E0201 08:15:00.152600 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" containerName="extract-utilities"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.152617 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" containerName="extract-utilities"
Feb 01 08:15:00 crc kubenswrapper[4835]: E0201 08:15:00.152636 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerName="extract-utilities"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.152651 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerName="extract-utilities"
Feb 01 08:15:00 crc kubenswrapper[4835]: E0201 08:15:00.152680 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerName="extract-content"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.152696 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerName="extract-content"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.153044 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" containerName="registry-server"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.153140 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerName="registry-server"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.153191 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerName="gather"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.153221 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerName="copy"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.154298 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.156448 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.157453 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.163198 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"]
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.229939 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq2kx\" (UniqueName: \"kubernetes.io/projected/ae05491b-9c88-45a4-945e-931aba8017d9-kube-api-access-cq2kx\") pod \"collect-profiles-29498895-dnh62\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.230106 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae05491b-9c88-45a4-945e-931aba8017d9-config-volume\") pod \"collect-profiles-29498895-dnh62\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.230142 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae05491b-9c88-45a4-945e-931aba8017d9-secret-volume\") pod \"collect-profiles-29498895-dnh62\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.330976 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae05491b-9c88-45a4-945e-931aba8017d9-config-volume\") pod \"collect-profiles-29498895-dnh62\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.331030 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae05491b-9c88-45a4-945e-931aba8017d9-secret-volume\") pod \"collect-profiles-29498895-dnh62\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.331101 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq2kx\" (UniqueName: \"kubernetes.io/projected/ae05491b-9c88-45a4-945e-931aba8017d9-kube-api-access-cq2kx\") pod \"collect-profiles-29498895-dnh62\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.332911 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae05491b-9c88-45a4-945e-931aba8017d9-config-volume\") pod \"collect-profiles-29498895-dnh62\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.340899 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae05491b-9c88-45a4-945e-931aba8017d9-secret-volume\") pod \"collect-profiles-29498895-dnh62\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.353624 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq2kx\" (UniqueName: \"kubernetes.io/projected/ae05491b-9c88-45a4-945e-931aba8017d9-kube-api-access-cq2kx\") pod \"collect-profiles-29498895-dnh62\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.489509 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.773268 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"]
Feb 01 08:15:00 crc kubenswrapper[4835]: I0201 08:15:00.850001 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62" event={"ID":"ae05491b-9c88-45a4-945e-931aba8017d9","Type":"ContainerStarted","Data":"5d1c9a203b7c647c0c31d2b9ed37cc26f8377786c5078c62cbfd6ffc113e9e96"}
Feb 01 08:15:01 crc kubenswrapper[4835]: I0201 08:15:01.861291 4835 generic.go:334] "Generic (PLEG): container finished" podID="ae05491b-9c88-45a4-945e-931aba8017d9" containerID="4872fbf7f3f3490812a90da1a553f5934519e0744c39b4125550102ace63aed7" exitCode=0
Feb 01 08:15:01 crc kubenswrapper[4835]: I0201 08:15:01.861367 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62" event={"ID":"ae05491b-9c88-45a4-945e-931aba8017d9","Type":"ContainerDied","Data":"4872fbf7f3f3490812a90da1a553f5934519e0744c39b4125550102ace63aed7"}
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.187246 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.302252 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae05491b-9c88-45a4-945e-931aba8017d9-config-volume\") pod \"ae05491b-9c88-45a4-945e-931aba8017d9\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") "
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.302360 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq2kx\" (UniqueName: \"kubernetes.io/projected/ae05491b-9c88-45a4-945e-931aba8017d9-kube-api-access-cq2kx\") pod \"ae05491b-9c88-45a4-945e-931aba8017d9\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") "
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.302400 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae05491b-9c88-45a4-945e-931aba8017d9-secret-volume\") pod \"ae05491b-9c88-45a4-945e-931aba8017d9\" (UID: \"ae05491b-9c88-45a4-945e-931aba8017d9\") "
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.303069 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae05491b-9c88-45a4-945e-931aba8017d9-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae05491b-9c88-45a4-945e-931aba8017d9" (UID: "ae05491b-9c88-45a4-945e-931aba8017d9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.308877 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae05491b-9c88-45a4-945e-931aba8017d9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae05491b-9c88-45a4-945e-931aba8017d9" (UID: "ae05491b-9c88-45a4-945e-931aba8017d9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.309841 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae05491b-9c88-45a4-945e-931aba8017d9-kube-api-access-cq2kx" (OuterVolumeSpecName: "kube-api-access-cq2kx") pod "ae05491b-9c88-45a4-945e-931aba8017d9" (UID: "ae05491b-9c88-45a4-945e-931aba8017d9"). InnerVolumeSpecName "kube-api-access-cq2kx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.403868 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq2kx\" (UniqueName: \"kubernetes.io/projected/ae05491b-9c88-45a4-945e-931aba8017d9-kube-api-access-cq2kx\") on node \"crc\" DevicePath \"\""
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.403901 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae05491b-9c88-45a4-945e-931aba8017d9-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.403914 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae05491b-9c88-45a4-945e-931aba8017d9-config-volume\") on node \"crc\" DevicePath \"\""
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.885569 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62" event={"ID":"ae05491b-9c88-45a4-945e-931aba8017d9","Type":"ContainerDied","Data":"5d1c9a203b7c647c0c31d2b9ed37cc26f8377786c5078c62cbfd6ffc113e9e96"}
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.885626 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d1c9a203b7c647c0c31d2b9ed37cc26f8377786c5078c62cbfd6ffc113e9e96"
Feb 01 08:15:03 crc kubenswrapper[4835]: I0201 08:15:03.885682 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498895-dnh62"
Feb 01 08:15:04 crc kubenswrapper[4835]: I0201 08:15:04.252131 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"]
Feb 01 08:15:04 crc kubenswrapper[4835]: I0201 08:15:04.257597 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"]
Feb 01 08:15:05 crc kubenswrapper[4835]: I0201 08:15:05.568266 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"
Feb 01 08:15:05 crc kubenswrapper[4835]: I0201 08:15:05.568306 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:15:05 crc kubenswrapper[4835]: I0201 08:15:05.568750 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196"
Feb 01 08:15:05 crc kubenswrapper[4835]: I0201 08:15:05.568909 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"
Feb 01 08:15:05 crc kubenswrapper[4835]: I0201 08:15:05.568976 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"
Feb 01 08:15:05 crc kubenswrapper[4835]: I0201 08:15:05.569127 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"
Feb 01 08:15:05 crc kubenswrapper[4835]: E0201 08:15:05.569710 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:15:05 crc kubenswrapper[4835]: I0201 08:15:05.590336 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3f2951-1c06-484a-9c2e-502d2adaa6cd" path="/var/lib/kubelet/pods/2a3f2951-1c06-484a-9c2e-502d2adaa6cd/volumes"
Feb 01 08:15:05 crc kubenswrapper[4835]: E0201 08:15:05.761448 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:15:05 crc kubenswrapper[4835]: I0201 08:15:05.919505 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"5a734e6ceee9575f66d97851e4425efefa4f55bf008553bf8d7e7711cbac90fb"}
Feb 01 08:15:05 crc kubenswrapper[4835]: I0201 08:15:05.921856 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"
Feb 01 08:15:05 crc kubenswrapper[4835]: I0201 08:15:05.922585 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 08:15:05 crc kubenswrapper[4835]: E0201 08:15:05.922767 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:15:06 crc kubenswrapper[4835]: I0201 08:15:06.934701 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="5a734e6ceee9575f66d97851e4425efefa4f55bf008553bf8d7e7711cbac90fb" exitCode=1
Feb 01 08:15:06 crc kubenswrapper[4835]: I0201 08:15:06.934768 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"5a734e6ceee9575f66d97851e4425efefa4f55bf008553bf8d7e7711cbac90fb"}
Feb 01 08:15:06 crc kubenswrapper[4835]: I0201 08:15:06.934830 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:15:06 crc kubenswrapper[4835]: I0201 08:15:06.935749 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"
Feb 01 08:15:06 crc kubenswrapper[4835]: I0201 08:15:06.935782 4835 scope.go:117] "RemoveContainer" containerID="5a734e6ceee9575f66d97851e4425efefa4f55bf008553bf8d7e7711cbac90fb"
Feb 01 08:15:06 crc kubenswrapper[4835]: E0201 08:15:06.936276 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:15:07 crc kubenswrapper[4835]: I0201 08:15:07.946589 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"
Feb 01 08:15:07 crc kubenswrapper[4835]: I0201 08:15:07.946981 4835 scope.go:117] "RemoveContainer" containerID="5a734e6ceee9575f66d97851e4425efefa4f55bf008553bf8d7e7711cbac90fb"
Feb 01 08:15:07 crc kubenswrapper[4835]: E0201 08:15:07.947645 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:15:08 crc kubenswrapper[4835]: I0201 08:15:08.566690 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"
Feb 01 08:15:08 crc kubenswrapper[4835]: E0201 08:15:08.567089 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 08:15:08 crc kubenswrapper[4835]: I0201 08:15:08.567921 4835 scope.go:117] "RemoveContainer" containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb"
Feb 01 08:15:08 crc kubenswrapper[4835]: I0201 08:15:08.568292 4835 scope.go:117] "RemoveContainer" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09"
Feb 01 08:15:08 crc kubenswrapper[4835]: I0201 08:15:08.568601 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"
Feb 01 08:15:08 crc kubenswrapper[4835]: I0201 08:15:08.568901 4835 scope.go:117] "RemoveContainer" containerID="4c18a6c0ad7fc9f3254096d7bfa007b9115d0360f41fd74b092f41a03c6d622a"
Feb 01 08:15:08 crc kubenswrapper[4835]: I0201 08:15:08.569089 4835 scope.go:117] "RemoveContainer" containerID="6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587"
Feb 01 08:15:08 crc kubenswrapper[4835]: E0201 08:15:08.570258 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off
5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.405618 4835 scope.go:117] "RemoveContainer" containerID="6a8f3f0f8045324c04ea0f25d07e785228bc538f428f47c8c77a96101a2d3e96" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.535979 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.537053 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.537080 4835 scope.go:117] "RemoveContainer" containerID="5a734e6ceee9575f66d97851e4425efefa4f55bf008553bf8d7e7711cbac90fb" Feb 01 08:15:09 crc kubenswrapper[4835]: E0201 08:15:09.537374 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.567233 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.567325 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.567355 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.567455 4835 scope.go:117] "RemoveContainer" containerID="0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.567468 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.567512 4835 scope.go:117] "RemoveContainer" 
containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.606365 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:15:09 crc kubenswrapper[4835]: E0201 08:15:09.606552 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 08:15:09 crc kubenswrapper[4835]: E0201 08:15:09.606630 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:17:11.606607405 +0000 UTC m=+3304.727043869 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.993007 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"0c3e0a3e9597bdb155921635fdb892ab02152d0aeb7f34fbd009a8180432bef2"} Feb 01 08:15:09 crc kubenswrapper[4835]: I0201 08:15:09.993064 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"9df23d40c0522f9586991535ae0e54bc92c2def3672f8bd8120d7bdaac2091c3"} Feb 01 08:15:10 crc kubenswrapper[4835]: E0201 08:15:10.288031 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.016849 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="0c3e0a3e9597bdb155921635fdb892ab02152d0aeb7f34fbd009a8180432bef2" exitCode=1 Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.016904 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="9df23d40c0522f9586991535ae0e54bc92c2def3672f8bd8120d7bdaac2091c3" exitCode=1 Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.016921 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="6a8f89bcd83bad633166d8bc4250fb546b85459bd2de617d314df92d37d4966e" exitCode=1 Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.016935 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" 
containerID="e823eeb91d8ec736e938d298ca6066a41830db9f28999233c2c9c9788a59649f" exitCode=1 Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.016965 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"0c3e0a3e9597bdb155921635fdb892ab02152d0aeb7f34fbd009a8180432bef2"} Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.017003 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"9df23d40c0522f9586991535ae0e54bc92c2def3672f8bd8120d7bdaac2091c3"} Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.017024 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"6a8f89bcd83bad633166d8bc4250fb546b85459bd2de617d314df92d37d4966e"} Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.017041 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"e823eeb91d8ec736e938d298ca6066a41830db9f28999233c2c9c9788a59649f"} Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.017066 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.017722 4835 scope.go:117] "RemoveContainer" containerID="9df23d40c0522f9586991535ae0e54bc92c2def3672f8bd8120d7bdaac2091c3" Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.017814 4835 scope.go:117] "RemoveContainer" containerID="0c3e0a3e9597bdb155921635fdb892ab02152d0aeb7f34fbd009a8180432bef2" Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.017856 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.017920 4835 scope.go:117] "RemoveContainer" containerID="0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30" Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.017928 4835 scope.go:117] "RemoveContainer" containerID="e823eeb91d8ec736e938d298ca6066a41830db9f28999233c2c9c9788a59649f" Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.017988 4835 scope.go:117] "RemoveContainer" containerID="6a8f89bcd83bad633166d8bc4250fb546b85459bd2de617d314df92d37d4966e" Feb 01 08:15:11 crc kubenswrapper[4835]: E0201 08:15:11.018376 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.100530 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.152858 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:15:11 crc kubenswrapper[4835]: I0201 08:15:11.196606 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:15:12 crc kubenswrapper[4835]: I0201 08:15:12.041341 4835 scope.go:117] "RemoveContainer" containerID="9df23d40c0522f9586991535ae0e54bc92c2def3672f8bd8120d7bdaac2091c3" Feb 01 08:15:12 crc kubenswrapper[4835]: I0201 08:15:12.041456 4835 scope.go:117] "RemoveContainer" containerID="0c3e0a3e9597bdb155921635fdb892ab02152d0aeb7f34fbd009a8180432bef2" Feb 01 08:15:12 crc kubenswrapper[4835]: I0201 08:15:12.041488 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:15:12 crc kubenswrapper[4835]: I0201 08:15:12.041550 4835 scope.go:117] "RemoveContainer" containerID="0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30" Feb 01 08:15:12 crc kubenswrapper[4835]: I0201 08:15:12.041558 4835 scope.go:117] "RemoveContainer" containerID="e823eeb91d8ec736e938d298ca6066a41830db9f28999233c2c9c9788a59649f" Feb 01 08:15:12 crc kubenswrapper[4835]: I0201 08:15:12.041600 4835 scope.go:117] "RemoveContainer" containerID="6a8f89bcd83bad633166d8bc4250fb546b85459bd2de617d314df92d37d4966e" Feb 01 08:15:12 crc kubenswrapper[4835]: E0201 08:15:12.042036 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:15:12 crc kubenswrapper[4835]: I0201 08:15:12.566963 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:15:12 crc kubenswrapper[4835]: I0201 08:15:12.567010 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:15:12 crc kubenswrapper[4835]: E0201 08:15:12.755940 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:15:13 crc kubenswrapper[4835]: I0201 08:15:13.067616 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"ad783eb6fe4c4812c231fb19a0fa9ce96e0e11c5cf2c5da4053aa0f294be2bad"} Feb 01 08:15:13 crc kubenswrapper[4835]: I0201 08:15:13.068145 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:15:13 crc kubenswrapper[4835]: I0201 08:15:13.068709 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:15:13 crc kubenswrapper[4835]: E0201 08:15:13.069155 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:15:14 crc kubenswrapper[4835]: I0201 08:15:14.081776 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="ad783eb6fe4c4812c231fb19a0fa9ce96e0e11c5cf2c5da4053aa0f294be2bad" exitCode=1 Feb 01 08:15:14 crc kubenswrapper[4835]: I0201 08:15:14.081903 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"ad783eb6fe4c4812c231fb19a0fa9ce96e0e11c5cf2c5da4053aa0f294be2bad"} Feb 01 08:15:14 crc kubenswrapper[4835]: I0201 08:15:14.082552 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:15:14 crc kubenswrapper[4835]: I0201 08:15:14.082791 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:15:14 crc kubenswrapper[4835]: I0201 08:15:14.082836 4835 scope.go:117] "RemoveContainer" containerID="ad783eb6fe4c4812c231fb19a0fa9ce96e0e11c5cf2c5da4053aa0f294be2bad" Feb 01 08:15:14 crc kubenswrapper[4835]: E0201 08:15:14.083540 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:15:15 crc kubenswrapper[4835]: I0201 08:15:15.097586 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:15:15 crc kubenswrapper[4835]: I0201 08:15:15.097640 4835 scope.go:117] "RemoveContainer" containerID="ad783eb6fe4c4812c231fb19a0fa9ce96e0e11c5cf2c5da4053aa0f294be2bad" Feb 01 08:15:15 crc kubenswrapper[4835]: E0201 08:15:15.098074 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:15:16 crc kubenswrapper[4835]: I0201 08:15:16.019241 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:15:16 crc kubenswrapper[4835]: I0201 08:15:16.119635 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:15:16 crc kubenswrapper[4835]: I0201 08:15:16.120091 4835 scope.go:117] "RemoveContainer" containerID="ad783eb6fe4c4812c231fb19a0fa9ce96e0e11c5cf2c5da4053aa0f294be2bad" Feb 01 08:15:16 crc kubenswrapper[4835]: E0201 08:15:16.120596 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:15:17 crc kubenswrapper[4835]: I0201 08:15:17.577529 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" Feb 01 08:15:17 crc kubenswrapper[4835]: I0201 08:15:17.577736 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46" Feb 01 08:15:17 crc kubenswrapper[4835]: I0201 08:15:17.577801 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:15:17 crc kubenswrapper[4835]: I0201 08:15:17.577943 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9" Feb 01 08:15:17 crc kubenswrapper[4835]: E0201 08:15:17.578586 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:15:20 crc kubenswrapper[4835]: I0201 08:15:20.567143 4835 scope.go:117] "RemoveContainer" containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb" Feb 01 08:15:20 crc kubenswrapper[4835]: I0201 08:15:20.567657 4835 scope.go:117] "RemoveContainer" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09" Feb 01 08:15:20 crc kubenswrapper[4835]: I0201 08:15:20.567697 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1" Feb 01 08:15:20 crc kubenswrapper[4835]: I0201 08:15:20.567743 4835 scope.go:117] "RemoveContainer" containerID="4c18a6c0ad7fc9f3254096d7bfa007b9115d0360f41fd74b092f41a03c6d622a" Feb 01 08:15:20 crc kubenswrapper[4835]: I0201 08:15:20.567750 4835 scope.go:117] "RemoveContainer" containerID="6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587" Feb 01 08:15:20 crc kubenswrapper[4835]: E0201 08:15:20.568077 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:15:21 crc kubenswrapper[4835]: I0201 08:15:21.567590 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:15:21 crc kubenswrapper[4835]: E0201 08:15:21.567870 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:15:24 crc kubenswrapper[4835]: I0201 08:15:24.567608 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:15:24 crc kubenswrapper[4835]: I0201 08:15:24.567973 4835 scope.go:117] "RemoveContainer" containerID="5a734e6ceee9575f66d97851e4425efefa4f55bf008553bf8d7e7711cbac90fb" Feb 01 08:15:24 crc kubenswrapper[4835]: E0201 08:15:24.568348 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:15:24 crc kubenswrapper[4835]: I0201 08:15:24.568704 4835 scope.go:117] "RemoveContainer" containerID="9df23d40c0522f9586991535ae0e54bc92c2def3672f8bd8120d7bdaac2091c3" Feb 01 08:15:24 crc kubenswrapper[4835]: I0201 08:15:24.568807 4835 scope.go:117] "RemoveContainer" containerID="0c3e0a3e9597bdb155921635fdb892ab02152d0aeb7f34fbd009a8180432bef2" Feb 01 08:15:24 crc kubenswrapper[4835]: I0201 08:15:24.568870 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:15:24 crc kubenswrapper[4835]: I0201 08:15:24.568943 4835 scope.go:117] "RemoveContainer" containerID="0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30" Feb 01 08:15:24 crc kubenswrapper[4835]: I0201 08:15:24.568956 4835 scope.go:117] "RemoveContainer" containerID="e823eeb91d8ec736e938d298ca6066a41830db9f28999233c2c9c9788a59649f" Feb 01 08:15:24 crc kubenswrapper[4835]: I0201 08:15:24.569048 4835 scope.go:117] "RemoveContainer" containerID="6a8f89bcd83bad633166d8bc4250fb546b85459bd2de617d314df92d37d4966e" Feb 01 08:15:24 crc kubenswrapper[4835]: E0201 08:15:24.569478 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:15:28 crc kubenswrapper[4835]: I0201 08:15:28.567133 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" Feb 01 08:15:28 crc kubenswrapper[4835]: I0201 08:15:28.567572 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46" Feb 01 08:15:28 crc kubenswrapper[4835]: I0201 08:15:28.567601 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:15:28 crc kubenswrapper[4835]: I0201 08:15:28.567687 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9" Feb 01 08:15:28 crc kubenswrapper[4835]: E0201 08:15:28.568019 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:15:28 crc kubenswrapper[4835]: E0201 08:15:28.783175 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:15:29 crc kubenswrapper[4835]: I0201 08:15:29.231633 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:15:30 crc kubenswrapper[4835]: I0201 08:15:30.566968 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:15:30 crc kubenswrapper[4835]: I0201 08:15:30.567387 4835 scope.go:117] "RemoveContainer" containerID="ad783eb6fe4c4812c231fb19a0fa9ce96e0e11c5cf2c5da4053aa0f294be2bad" Feb 01 08:15:30 crc kubenswrapper[4835]: E0201 08:15:30.567855 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515137605655024462 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015137605656017400 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015137577176016530 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015137577177015501 5ustar corecore